Add loader for candombe_beat_downbeat #553
Conversation
Thank you for your contribution to mirdata, and sorry for the delay. Before we start the formal review, a couple of high-level comments from our side:
Let us know if you have any questions!
Hi there @harshpalan, and sorry for the delay. I've made the changes you requested, and I'm currently testing the dataloader to make sure everything is working properly. I do have one question: I see the formatting test is not passing; however, when I run black locally, I see that no changes are needed. Any idea what might be going on? Thanks!
Could we resume from here? Any help would be much appreciated to solve these issues and get the dataset incorporated into mirdata. Thank you!
Hey @mrocamora, thanks for bringing this to our attention again. @harshpalan is looking into it and will come back to you soon. We suspect that the formatting issue may be related to the black version, but we're a bit unsure about the errors appearing in the other loaders. Will keep you posted.
Hi @magdalenafuentes and @harshpalan, I hope all is well with you. Just wanted to say I'm also over here if you need me to modify something. Thanks!
Hey @jimearruti @mrocamora, thanks for this PR and sorry for the slow response on our side. We've recently migrated soundata to GitHub Actions and updated the Python and package versions, and we're looking into doing the same in mirdata in the next couple of weeks, so we're holding off a bit on the other PRs for the moment to incorporate those changes first and then have the PRs tested with the updated pipeline. We'll make sure to look at your PRs as soon as we finish that up, so you'll hear from us soon. Thanks for your patience!
hey @jimearruti, took a look at the PR. Looks good :) I tried to commit to your fork but I don't have permission, so I'd like to ask you to add some modifications to your branch to see if we can get past these problems with Py3.8 and formatting, and move on with reviewing and merging the loader soon. First, you'd need to run black again on your loader main file (mirdata/datasets/candombe.py). Second, the problems in Python 3.8 are presumably caused by the newest versions of numpy that get installed. Could you please edit the setup file to update the following dependencies as attached here:
Let's see if these minor changes help get the tests passing so we can move on. Let us know how that goes! :)
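The dependency changes attached to the comment above are not shown here. Purely as a rough, hypothetical illustration of what constraining numpy in setup.py for Python 3.8 compatibility could look like (the exact version bounds below are assumptions, not the reviewer's suggested pins):

```python
# Illustrative setup.py excerpt only -- the bounds below are assumptions,
# not the pins attached in the comment above.
from setuptools import find_packages, setup

setup(
    name="mirdata",
    packages=find_packages(),
    install_requires=[
        # hypothetical upper bound so pip installs a numpy release that
        # still ships wheels compatible with Python 3.8
        "numpy>=1.20.0,<1.24.0",
        "librosa>=0.8.0",
    ],
)
```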
Hi @genisplaja, thank you very much for the help! I ran a clean installation with the changes in setup.py you proposed, but I got some incompatibility issues:
I will try again with the prior version of setup.py, and if that is still not working I'll search for a more suitable numpy version. I do have some follow-up questions:
hey @jimearruti, sorry for the back and forth, and thanks for taking a look at my suggestions. First of all, just wanted to let you know that @guillemcortes is working on PR #596, which should help resolve the problems in the testing system. We have reached the conclusion that focusing on PR #596 is the fastest way to move on with the blocked PRs, so we will be working on that this week, aiming to get it merged by the beginning of next week, and hopefully that will unblock many PRs, including yours. Will keep you posted on that, thanks again for your patience and time. Regarding your questions:
Great @genisplaja, thank you for all the information. I'll be over here when that PR is merged :)
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #553      +/-   ##
==========================================
+ Coverage   96.96%   97.08%   +0.11%
==========================================
  Files          58       62       +4
  Lines        6990     7277     +287
==========================================
+ Hits         6778     7065     +287
  Misses        212      212
```
Hi @jimearruti, thanks for the patience! PR #596 has been merged. I've updated your branch to match master, and the good news is that tests are passing except for the formatting test, which should be easy to solve. Just install black, run it on the code, and push the changes.
Hi @genisplaja and @guillemcortes, hope all is well over there. I formatted the code with black and tests seem to be passing. Let me know if you need anything else on my side 😄
Hey @jimearruti! Thanks for your patience and nice work :) Thanks for contributing to mirdata! I'm just requesting a few minor changes, mostly documentation stuff. The rest looks good and ready to me, tests are passing and coverage is 100%.
mirdata/datasets/candombe.py (outdated):
```python
class Track(core.Track):
    """candombe track class
    # -- YOU CAN AUTOMATICALLY GENERATE THIS DOCSTRING BY CALLING THE SCRIPT:
    # -- `scripts/print_track_docstring.py my_dataset`
    # -- note that you'll first need to have a test track (see "Adding tests to your dataset" below)

    Args:
        track_id (str): track id of the track

    Attributes:
        audio_path (str): path to audio file
        annotation_path (str): path to annotation file
        # -- Add any of the dataset specific attributes here

    Cached Properties:
        annotation (EventData): a description of this annotation

    """
```
You would need to update the docstring here with the attributes and cached properties of your Track class. You would basically need to include the audio and beats paths as attributes, and beats as a cached property.
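As a minimal sketch of what the requested docstring update might look like (the attribute and property names are assumed from the comment above, not taken from the final loader):

```python
from mirdata import core  # assumed import for this sketch


class Track(core.Track):
    """candombe track class

    Args:
        track_id (str): track id of the track

    Attributes:
        audio_path (str): path to the audio file
        beats_path (str): path to the beat/downbeat annotation file

    Cached Properties:
        beats (annotations.BeatData): beat and downbeat annotations

    """
```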
hey @jimearruti, I deleted some suggested changes, so please stick to the ones visible now :) Thanks!!
@genisplaja let me know if everything is ok now or if you need any more changes. |
Candombe dataset
Description
Please include the following information in the top-level docstring of the dataset's module mydataset.py:
Dataset loaders checklist:
- Create a script in scripts/, e.g. make_my_dataset_index.py, which generates an index file (a minimal sketch of such a script follows this checklist).
- Save the generated index in mirdata/indexes/, e.g. my_dataset_index.json.
- Create the loader module, e.g. mirdata/my_dataset.py.
- Create tests in tests/datasets/, e.g. test_my_dataset.py.
- Add the module to docs/source/mirdata.rst and docs/source/table.rst.
- Run tests/test_full_dataset.py on your dataset.

If your dataset is not fully downloadable there are two extra steps you should follow:
- Run pytest -s tests/test_full_dataset.py --local --dataset my_dataset once on your dataset locally and confirm it passes.
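As referenced in the first checklist item, here is a minimal, hypothetical sketch of what an index-generation script could look like. The directory layout, file extensions, and dataset name are assumptions for illustration, not the actual script in this PR:

```python
# make_candombe_index.py -- illustrative sketch only; paths, extensions and
# the index layout are assumptions, not the script shipped in this PR.
import argparse
import glob
import hashlib
import json
import os


def md5(file_path):
    """Return the MD5 checksum of a file, read in chunks."""
    hash_md5 = hashlib.md5()
    with open(file_path, "rb") as fhandle:
        for chunk in iter(lambda: fhandle.read(4096), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()


def make_index(data_path, index_path):
    """Build a {track_id: {file_key: (relative_path, checksum)}} index."""
    index = {"version": "1.0", "tracks": {}}
    for audio_file in sorted(glob.glob(os.path.join(data_path, "audio", "*.flac"))):
        track_id = os.path.splitext(os.path.basename(audio_file))[0]
        beats_file = os.path.join(data_path, "annotations", track_id + ".csv")
        index["tracks"][track_id] = {
            "audio": (os.path.relpath(audio_file, data_path), md5(audio_file)),
            "beats": (os.path.relpath(beats_file, data_path), md5(beats_file)),
        }
    with open(index_path, "w") as fhandle:
        json.dump(index, fhandle, indent=2)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Generate a candombe index file.")
    parser.add_argument("data_path", help="path to the downloaded dataset")
    parser.add_argument("index_path", help="where to write the JSON index")
    args = parser.parse_args()
    make_index(args.data_path, args.index_path)
```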
Please-do-not-edit flag
To reduce friction, we will make commits on top of contributors' pull requests by default unless they use the please-do-not-edit flag. If you don't want this to happen, don't forget to add the flag when you start your pull request.

As discussed with @magdalenafuentes, here's the PR for the "Uruguayan candombe drumming - beat and downbeat tracking data set" by Martin Rocamora and Luis Jure, which is now hosted on Zenodo: https://zenodo.org/record/6533068/
This PR includes a dataloader and everything needed as described in the Contributing page.
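Once merged, using the loader through the standard mirdata API would presumably look something like the following. The dataset identifier and the track attribute names are assumptions based on the PR title and review comments, not the final loader:

```python
import mirdata

# Dataset identifier assumed from the PR title; the merged loader may
# register under a different name (e.g. "candombe").
candombe = mirdata.initialize("candombe_beat_downbeat", data_home="/path/to/candombe")
candombe.download()   # fetch audio and beat/downbeat annotations
candombe.validate()   # check local files against the index checksums

track = candombe.choice_track()  # pick a random track from the index
print(track.audio_path)
print(track.beats)               # assumed cached property holding BeatData
```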