Changing speeds for different languages #4738
Comments
Comment 1 by jteh on 2014-12-21 22:49: At least for now, you might want to consider using manual configuration profiles, which allow you to set the exact settings you need.
Comment 2 by steverep80 on 2015-01-15 18:49: The way to solve any annoyance to other users is simply to have a global set of pitch/rate/volume, and an option, such as a checkbox, to override these inherited settings for particular voices of the user's choice.
CC @mrdin8877, @LeonarddeR. Even if we had separate rates for individual voices, it's worth noting that this would not take effect for automatic language/dialect switching. NVDA just tells the synthesiser about language/dialect changes, so there's no way for NVDA to know what voice will actually get chosen, and thus no way for it to reset the rate according to your configuration. The only way to make this work the way you expect would be for NVDA to explicitly know which voices should be chosen for which languages. That is something I think should be managed by configuration profiles, thus allowing you to configure other things for each language as well, instead of just the voice and rate. P3 because there is definitely a compelling use case for this, but it requires speech refactor (#4877) before it can even be considered, and will then be considerable implementation effort beyond that.
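To illustrate why the rate can't follow the voice: the sketch below models the division of responsibility described above. It is not NVDA's actual API; the class names, voice names, and rate table are hypothetical. The screen reader layer only passes a language tag to the synthesizer, which chooses the voice internally, so the screen reader never learns which voice (and hence which configured rate) would apply.

```python
# Illustrative sketch only (hypothetical classes, not NVDA's real API).
# NVDA-like layer passes a language tag; the synth maps it to a voice
# privately, so a per-voice rate table on the screen reader side is
# unreachable at the moment of speaking.

class Synthesizer:
    """Hypothetical synth driver that maps languages to voices internally."""
    _voice_for_lang = {"en": "EnglishVoiceA", "fr": "FrenchVoiceB"}

    def speak(self, text: str, lang: str) -> str:
        voice = self._voice_for_lang.get(lang, "DefaultVoice")
        # The chosen voice never leaves the synth driver.
        return f"[{voice}] {text}"


class ScreenReader:
    """Knows the language tag of the text, but not the resulting voice."""
    def __init__(self, synth: Synthesizer, per_voice_rates: dict):
        self.synth = synth
        self.per_voice_rates = per_voice_rates  # e.g. {"FrenchVoiceB": 60}

    def speak_tagged(self, text: str, lang: str) -> str:
        # We cannot consult per_voice_rates here: the voice name is
        # internal to the synth, so the rate cannot be adjusted per voice.
        return self.synth.speak(text, lang)


sr = ScreenReader(Synthesizer(), {"FrenchVoiceB": 60})
print(sr.speak_tagged("Bonjour", "fr"))  # voice is chosen inside the synth
```

Making NVDA explicitly map languages to voices, as proposed, would move that lookup to the screen-reader side, where the configured rate could then be applied.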
General question: should priorities for tickets that depend on the speech refactor be reassessed, given that the refactor, once a huge hurdle, is now done and in place?
We've had a couple of questions about this recently, specifically asking how to choose the voice used for automatic language detection. For example: I'm reading in English, but NVDA encounters French text in a document; how can I choose which French voice gets used if my synth has several, and set its speed? I wonder whether being able to create profiles for languages would solve these issues. For instance, when creating a profile, the "Use profile for" options could include "Language" with a drop-down to select the language. You could manually activate the profile, set what you like, then deactivate it, as you can for, say, Say All. When NVDA encounters text in that language, it would first load the profile, if one exists, before handing off to the synth. One question to answer then: if I have a profile for Word and a profile for French, and I encounter French text in Word, which gets priority?
Just keep in mind that multiple profiles may be active at the same time, e.g. Word and Say All. You might define some options in the Word application profile (e.g. report font size in document formatting settings) and a specific voice/TTS (e.g. eSpeak instead of the default OneCore) in the Say All profile. If you then run Say All in Word, you should hear eSpeak reading and have the font size reported. With this in mind, we may just imagine a third type of profile: a language profile that could be triggered manually, or when NVDA encounters text tagged with a different language. Let's hope this issue becomes a reality soon!
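The stacking behaviour described above can be sketched as a simple merge: a hedged illustration, assuming a "later-activated profile overrides earlier ones, per setting" rule. The profile names and settings keys here are hypothetical, not NVDA's actual configuration schema.

```python
# Hedged sketch: stacked configuration profiles resolved by merging in
# activation order, so later profiles override earlier ones per key.
# Profile contents and key names below are hypothetical examples.

def resolve(profile_stack: list) -> dict:
    """Merge profiles in activation order; later entries win per setting."""
    merged = {}
    for profile in profile_stack:
        merged.update(profile)
    return merged


default = {"synth": "OneCore", "rate": 50, "reportFontSize": False}
word_profile = {"reportFontSize": True}                # app trigger
french_profile = {"synth": "Vocalizer", "rate": 40}    # hypothetical language trigger

# Word and a language profile active together; the language profile
# was triggered last in this example, so its voice and rate win, while
# Word's formatting setting still applies.
active = resolve([default, word_profile, french_profile])
print(active)
```

Under this model, the "Word vs. French" priority question from the previous comment reduces to deciding where a language-triggered profile sits in the activation order relative to application profiles.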
I vigorously agree with this feature request. Among multilingual folks, my sense is that most people have asymmetric listening skills across their languages. In my case, I am fluent in English, Hindi, and Gujarati. My preferred speech rate for English is highest, because I interact with so much English text that this is a natural consequence. I speak Hindi just as much but don't consume Hindi text on my computer as often, so I am less attuned to hearing Hindi speech synthesis and keep its rate somewhat lower. Gujarati is a notch lower still. The new language I am picking up, Spanish, requires me to slow down even more. What this means is that it seems perfectly natural for users to want to set varying speech rates by language. For me personally, it would be a significant productivity boost: it would remove the need to routinely scramble to alter speech rates, and it would let me better track my progress (and improve?) in listening skills across languages.
Reported by burakyuksek on 2014-12-21 17:52
I use Vocalizer and I work with two different languages, English and Turkish. For example, when I come across a tweet in English while I am using Turkish, I switch to English to read the tweet. The problem is that Turkish speaks slower at a speed of 85%, but English speaks faster, and I need to slow it down to be able to understand the tweet. Can we please have a feature that can assign separate speed values for each language? Thanks.