This repository contains the static site for the HEAR Benchmark website and is the submission repository for new entries to the HEAR leaderboard. To find out more about HEAR, please visit the website or read the paper.
If you have any questions about submitting to the HEAR benchmark, please open a new issue in this repository. If you have questions that are specific to using the hear-eval-kit, please open a new issue in that repository.
Once you have developed an audio embedding model and evaluated it on the benchmark suite of tasks with the hear-eval-kit, submit the results here as follows:
- Create a GitHub account.
- Fork the HEAR Benchmark Site repository.
- Clone your fork into your desktop environment:

  ```sh
  git clone https://github.com/YOUR-USERNAME/hear-benchmark
  ```

- Create a branch using your team name:

  ```sh
  git checkout -b TEAM-NAME
  ```

- Append your test score results to the file `docs/leaderboard.csv`. The test scores are output by hear-eval-kit in a separate JSON file for each task; this example Google Colab demonstrates how to run evaluation and find the final test scores (see the sketch after this list for one way to collect them). You will also need to include a model name and URL for your work in the CSV file. Your model name should be your team/institution name plus a short name describing your model, for example: HEAR Baseline. The included URL can be a link to a GitHub repo or paper for your work.
- Commit the updated `leaderboard.csv` file:

  ```sh
  git commit -a -m "Some comment"
  ```

- Push to GitHub:

  ```sh
  git push origin TEAM-NAME
  ```

- Issue a pull request (PR) with a title containing your team name, and follow the template that appears once you open the pull request. The template asks for details about your submission, including a brief description of your model and the type of training data used (speech/broad/music).
- Once the pull request is accepted and merged, your results will appear on the leaderboard on the website.
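
Below is a minimal sketch, not an official HEAR script, of one way to gather the per-task test-score JSON files written by hear-eval-kit so the numbers can be copied into `docs/leaderboard.csv`. The `eval-output` directory and the `*.json` glob are assumptions; point them at wherever your evaluation run actually wrote its score files (the Colab linked above shows the exact paths), and match the column layout used by the existing rows in `docs/leaderboard.csv`.

```sh
# Minimal sketch (assumption: score files are JSON files somewhere under ./eval-output;
# adjust the directory and pattern to your actual hear-eval-kit output layout).
# Prints each score file's path followed by its contents for manual copying into
# docs/leaderboard.csv.
find eval-output -name "*.json" | sort | while read -r f; do
  echo "== $f =="
  cat "$f"
done
```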
See the Jekyll Docs for instructions on building the site on your local machine for development:

```sh
cd docs/
bundle exec jekyll serve
```
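
If `bundle exec jekyll serve` fails because gems are missing, a one-time dependency install from the `docs/` directory should resolve it (assuming Ruby and Bundler are already installed):

```sh
cd docs/
bundle install
```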