
Add a script to toggle language detection #10253

Closed
CyrilleB79 opened this issue Sep 20, 2019 · 9 comments · Fixed by #13680

@CyrilleB79
Collaborator

Is your feature request related to a problem? Please describe.

In some cases, I need to deactivate auto-language switching:

  • Sometimes webpages are tagged with the wrong language, e.g. written in French but tagged as English content, probably because the webmaster used a template or a default setting in English.
  • Sometimes I prefer reading English-tagged content with my default voice's native language (French), e.g. math content.

Conversely, sometimes I prefer to have auto-language switching on, to get used to the real pronunciation of a language.
Pressing NVDA+Ctrl+V to open the voice settings panel, tabbing in the dialog box to the auto language switching option, activating it and pressing Enter is a bit long.

Describe the solution you'd like

Add a script to toggle automatic language switching. Such scripts exist for many other settings panel options, so this would be consistent. The script may ship unassigned and be assigned by the user, as is already done for many other setting toggle scripts.
The script could also cover the auto dialect switching option and cycle through the following 3 states (a sketch of such a script is given after the list):

  • Language and dialect detection off
  • Language detection on, dialect off
  • Language and dialect detection on

Note that the 4th possible state (language detection off and dialect on) does not make sense to me.
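
For illustration, here is a minimal sketch of what such a global plugin script could look like. It assumes NVDA's standard speech configuration keys (autoLanguageSwitching and autoDialectSwitching) and the scriptHandler.script decorator; the script name and spoken messages are placeholders, not a final implementation.

```python
# Minimal sketch of a global plugin cycling through the 3 states described above.
# Assumption: the config keys "autoLanguageSwitching" and "autoDialectSwitching"
# under config.conf["speech"]; verify against the NVDA version in use.
import globalPluginHandler
import config
import ui
from scriptHandler import script


class GlobalPlugin(globalPluginHandler.GlobalPlugin):

	@script(
		# Left unassigned on purpose: the user binds a gesture from Input Gestures.
		description="Cycles through automatic language and dialect switching modes",
	)
	def script_cycleAutoLanguageSwitching(self, gesture):
		if not config.conf["speech"]["autoLanguageSwitching"]:
			# State 1 -> 2: language switching on, dialect switching off.
			config.conf["speech"]["autoLanguageSwitching"] = True
			config.conf["speech"]["autoDialectSwitching"] = False
			ui.message("Language switching on, dialect switching off")
		elif not config.conf["speech"]["autoDialectSwitching"]:
			# State 2 -> 3: language and dialect switching on.
			config.conf["speech"]["autoDialectSwitching"] = True
			ui.message("Language and dialect switching on")
		else:
			# State 3 -> 1: language and dialect switching off.
			config.conf["speech"]["autoLanguageSwitching"] = False
			config.conf["speech"]["autoDialectSwitching"] = False
			ui.message("Language and dialect switching off")
```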

Describe alternatives you've considered

  • Alternative 1: Add an accelerator to the automatic language switching option in the speech settings panel. Even though it speeds up the process a bit, it is not as fast as a dedicated script.
  • Alternative 2: Toggle only the auto language switching option, not auto dialect switching.

Additional context

I have a personal script that does the job here:
https://raw.githubusercontent.com/CyrilleB79/NVDAScripts/master/globalPlugins/autoLangSwitch.py

@JulienCochuyt
Collaborator

I up-vote the need for this feature.
I'm not sure a dedicated gesture is necessary though, as I think I'd rather it just be an additional setting in the speech settings ring.

Almost off topic: NVDA currently behaves differently with regard to language switching when hovering with the mouse versus reading in browse mode. Quite annoying.

@CyrilleB79
Collaborator Author

For now, the speech settings ring gathers only parameters from the synthesizer. All these parameters appear before the automatic language switching option in the speech settings panel and are provided by the synthesizer.
On the contrary, automatic language switching is a global parameter, as are punctuation level and all the other parameters defined below it in the speech settings window.
So IMO it makes sense to have a dedicated script and not to integrate this parameter into the synth settings ring.
Other opinions welcome...

@XLTechie
Collaborator

XLTechie commented Sep 20, 2019 via email

@CyrilleB79
Collaborator Author

Integrating this feature into the synth settings ring seems to require modifications in each of the synth drivers. Indeed, it is the synth driver that defines the settings it supports in the ring. Moreover, the driver then implements the functions to get and set each setting in the ring. This would thus create a bunch of synth driver dependencies to upgrade... Really overkill for such a minor change.

@JulienCochuyt
Collaborator

Integrating this feature into the synth settings ring seems to require modifications in each of the synth drivers.

Actually, the base class could add this setting for its derived implementations.

Indeed, it is the synth driver that defines the settings it supports in the ring.

Currently, language detection is toggled globally, whether or not the chosen synth driver supports the feature.
Each and every driver would be impacted only if we wanted to include the setting in the ring only when it is supported.

Moreover, the driver then implements the functions to get and set each setting in the ring.

Again, the base class can handle this.

I'm not nitpicking just because I think this setting should be included in the ring.
I think it would fit there, but we are a community.
Anyway, I would advise concentrating on the UX we wish to obtain first, and only then tackling the technical aspects of its implementation, as Python is pretty flexible in this matter.
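
For the sake of discussion, a rough sketch of the base-class approach might look like the following. It leans on NVDA's driver settings framework (BooleanDriverSetting, supportedSettings and the _get_/_set_ property convention); module paths and exact signatures vary between NVDA versions, so treat this purely as an illustration of the idea, not as working code.

```python
# Hypothetical illustration: a base synth driver class exposing the global
# option to every derived driver through the settings ring.
# Assumption: BooleanDriverSetting is importable from driverHandler and accepts
# an availableInSettingsRing flag; verify against the targeted NVDA version.
import config
from driverHandler import BooleanDriverSetting
from synthDriverHandler import SynthDriver


class LanguageSwitchingSynthDriver(SynthDriver):
	"""Hypothetical base class adding the setting for all derived drivers."""

	# A real driver would merge this with its own synthesizer-provided settings.
	supportedSettings = (
		BooleanDriverSetting(
			"autoLanguageSwitching",
			"Automatic language switching",
			availableInSettingsRing=True,
		),
	)

	def _get_autoLanguageSwitching(self):
		# Backed by the global speech configuration, not by the synth itself.
		return config.conf["speech"]["autoLanguageSwitching"]

	def _set_autoLanguageSwitching(self, value):
		config.conf["speech"]["autoLanguageSwitching"] = value
```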

@CyrilleB79
Collaborator Author

@JulienCochuyt, thanks for your comments; they are welcome and valuable.

I agree with you that we should concentrate mainly on the targeted user experience.
IMO the synth settings ring should not contain too many parameters, so that the ring remains usable. An add-on has already appeared that configures the list of parameters in the ring in order to reduce them...

If we choose to integrate language detection into the ring, this would be the moment to really think about which parameters should remain in the ring. E.g., I never modify the voice pitch on the fly and do not think many people do.
Also, language detection should be integrated into the ring only if it is supported by the synthesizer (as is done for rate boost, for example), in order to guarantee a good user experience.

Any comment from someone else with experience of the NVDA synth framework would be welcome. I do not really know who has looked at or worked on such topics. @feerrenrut?

@JulienCochuyt
Collaborator

An add-on has already appeared that configures the list of parameters in the ring in order to reduce them...

This is actually a feature I would much appreciate seeing in NVDA core.
I would even go a step further and allow one to e.g. select which voices or languages are included for a given synth.
I'm an eSpeak user and often switch languages. It's pretty annoying to have to remember that one language is 14 steps away from another.
VoiceOver on iOS offers a pretty usable UX in this matter, even if one could argue it could perhaps be perfected with a second level.

If we choose to integrate language detection into the ring, this would be the moment to really think about which parameters should remain in the ring. E.g., I never modify the voice pitch on the fly and do not think many people do.

I agree, even though I fear it is always harder for end users to accept the removal of features than to accept additions or increased configurability.

Also, language detection should be integrated into the ring only if it is supported by the synthesizer (as is done for rate boost, for example), in order to guarantee a good user experience.
Any comment from someone else with experience of the NVDA synth framework would be welcome. I do not really know who has looked at or worked on such topics. @feerrenrut?

One of the questions might be: are there synths that support it for some voices and not for others?
Another might be: what links a language to a voice? E.g. when more than one voice in a given synth supports a given language, how is the voice currently chosen when autodetection comes into play?

@burakyuksek
Contributor

Hello,
I think such a feature should be added to the Input Gestures dialog box so that we can configure the shortcut ourselves.

@bhavyashah

Vigorously seconding having an unassigned gesture for automatic language switching. I currently go NVDA+Ctrl+V > Alt+O > Tab > Spacebar > Enter, which is a bit cumbersome. I do this frequently because, while learning a new language that uses the same alphabet, I find it clarifying, in my experience, to read Spanish text with both its English and Spanish pronunciations.

seanbudd pushed a commit that referenced this issue May 11, 2022
Fixes #10253

Summary of the issue:
A gesture is missing to control the automatic language switching and automatic dialect switching options.

Description of how this pull request fixes the issue:
Add an unassigned script that cycles through the following 3 automatic language and dialect switching modes:
- Language and dialect switching off
- Language switching on, dialect switching off
- Language and dialect switching on

Note that the 4th possible state (language switching off and dialect switching on) does not make sense.
@nvaccessAuto nvaccessAuto added this to the 2022.2 milestone May 11, 2022