I have KenLM scoring integrated at Line 96. The performance on my test set (both the LM and the test set are LibriSpeech-based) is worse than using no LM at all. I score only at spaces: I multiply the LM log probability (converted from log10 to natural log) by alpha, and add a word-insertion bonus of beta * log(word count in prefix). I apply this only to the "not blank" probability. I have had no success. Has anyone managed to integrate LM scoring successfully?

For comparison, I decoded the same test set with the same acoustic model and language model using the PaddlePaddle decoder, and WER improved by 6%. Their decoder uses a trie-based LM aided by WFST correction alongside this beam-search algorithm. I would appreciate any pointers or help here. Thanks!
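For concreteness, here is a minimal sketch of the scoring rule described above (applied when a space is emitted during the beam search). The function name, parameter names, and the alpha/beta values are hypothetical, not from any specific decoder; the log10-to-natural-log conversion and the `beta * log(word count)` bonus follow the description in the question:

```python
import math

def lm_adjusted_logp(am_logp, lm_log10p, word_count, alpha=0.5, beta=1.0):
    """Combine an acoustic log-prob with a word-boundary LM score.

    am_logp    -- natural-log acoustic probability of the prefix
    lm_log10p  -- KenLM score (log10) for the last completed word
    word_count -- number of completed words in the prefix
    alpha/beta -- LM weight and word-insertion bonus (hypothetical values)
    """
    # KenLM returns log10 probabilities; convert to natural log.
    lm_logp = lm_log10p * math.log(10)
    # Word-insertion bonus compensates for the LM penalizing longer prefixes.
    bonus = beta * math.log(word_count) if word_count > 0 else 0.0
    return am_logp + alpha * lm_logp + bonus

# Example: prefix with 2 completed words, last word scored -1.0 (log10) by the LM.
score = lm_adjusted_logp(am_logp=-2.0, lm_log10p=-1.0, word_count=2)
```

One thing worth double-checking against a working decoder such as PaddlePaddle's: whether the LM term should be folded into only the "not blank" path or into the total prefix probability at the word boundary, since restricting it to one path changes the relative ranking of beams.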