Speakers id. #1
Hi,
For some smaller subsets it is possible to say that they come from one speaker, but this is mostly true only for audiobooks.
As a general rule, we aimed to gather as much diverse data as possible.
There is currently a speaker field in the metadata, but for now it is mostly useless.
I will think about marking some parts of the dataset as reliably having the same speaker.
On April 27, 2019 6:02:11 PM GMT+03:00, i7p9h9 wrote:
> Is it possible to extract speaker IDs from your dataset to use for speaker recognition tasks?
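If the metadata is distributed as a CSV manifest with a per-utterance speaker column (the file layout and the column name `speaker` are assumptions here, not the dataset's documented schema), filtering utterances by speaker could be sketched like this:

```python
import csv

def rows_for_speaker(manifest_path, speaker_id):
    """Return all utterance rows whose 'speaker' field matches speaker_id.

    Assumes a CSV manifest with a header row containing a 'speaker' column;
    both the schema and the column name are illustrative.
    """
    with open(manifest_path, newline="", encoding="utf-8") as f:
        return [row for row in csv.DictReader(f)
                if row.get("speaker") == speaker_id]
```

This would at least let users exploit the reliably labeled subsets (e.g. audiobooks) even if the field is empty elsewhere.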
We are planning to share a much larger dataset based on audio-books.
It would be great if the data came with dedicated directories for each speaker (e.g. …), because it makes sense to separate speakers between training and testing, not just for speaker recognition but also for STT tasks. However, …
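The speaker-disjoint train/test split suggested above can be sketched as follows. This is a minimal illustration assuming utterances are available as `(utterance, speaker)` pairs; the function name and parameters are illustrative, not part of the dataset's tooling:

```python
import random
from collections import defaultdict

def speaker_disjoint_split(utterances, test_fraction=0.1, seed=0):
    """Split (utterance, speaker) pairs so no speaker appears in both sets."""
    by_speaker = defaultdict(list)
    for utt, spk in utterances:
        by_speaker[spk].append(utt)
    # Shuffle speakers deterministically, then hold out a fraction of them.
    speakers = sorted(by_speaker)
    rng = random.Random(seed)
    rng.shuffle(speakers)
    n_test = max(1, int(len(speakers) * test_fraction))
    train = [(u, s) for s in speakers[n_test:] for u in by_speaker[s]]
    test = [(u, s) for s in speakers[:n_test] for u in by_speaker[s]]
    return train, test
```

Splitting by speaker rather than by utterance avoids the optimistic bias of testing on voices the model has already heard.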
Hi! Doing exactly this is unfortunately not feasible due to the nature of the dataset (zero money invested in annotation). But we could share speaker labels privately as metadata for a very limited subset of the data, if that helps. Mostly books.
I see. Well, my workaround here is throwing everything uncertain into the … If I may ask, what kind of word error rate (WER) did you get on the entire …
We have a small subset of the data (15 hours) manually annotated; we will be posting it soon.
Sorry for the late reply, but please refer to tickets #5 and #7. There have been reports that if you train ESPnet without the badly annotated data, you get a much better result. Weeding out the bad data will be the foremost focus of our future work.
Hi, stefan: You mentioned that you have trained an ASR system on Common Voice Russian. Could you share the latest WER you got on it? I don't know much about the Russian language, and I trained a Russian ASR system with only about 60 hours of RU Common Voice data. The WER is about 40% with a Kaldi chain model, even with a test-set text LM. Do you think that's normal? I haven't found any benchmark on Common Voice Russian. Also, I notice you often evaluate Russian ASR with CER. Is that more common for Russian ASR? Thanks a lot!
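For what it's worth, WER and CER are both edit distances normalized by reference length, computed over words and characters respectively; CER is often reported for morphologically rich languages like Russian because inflectional variants inflate WER. A minimal sketch of both metrics (the helper names are illustrative):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

def wer(ref, hyp):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = ref.split()
    return edit_distance(ref_words, hyp.split()) / len(ref_words)

def cer(ref, hyp):
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(ref, hyp) / len(ref)
```

With only ~60 hours of training data, a 40% WER on Russian is plausible; comparing CER alongside WER makes results easier to relate across languages.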
Is it possible to extract speaker IDs from your dataset to use for speaker recognition tasks?