Focus navigation and other features for accessibility? #4122
Comments
Hello Ethin, thanks for your detailed message.

Tab-based focus navigation: Currently tabbing only goes through certain fields (those which can be turned into a text input: InputText, SliderFloat, DragFloat, etc.). There is a plan to rework this so it goes through all fields (some work has been done as recently as #4079 (comment); I had more work done but never finished it, having faced some technical issues a few years ago when I tried). Anyhow, I think it will eventually happen.

Intercepting focus events: This is not possible yet but should be possible to add to the codebase. We would need to know/understand exactly what data would be useful to you and how it would be piped to the screen reader, in order to understand how to design the platform/backend API to provide this info. If you are keen to investigate dear imgui further, it would be reasonable to work together on drafting something.
From dear imgui's point of view, it won't be widget-specific (as in, not tied to tables, menu bars, etc.), so it seems like it all gets bundled into the same "intercept focus events" feature?
Ditto, it would be good to know precisely what information you would need to leverage this.
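To make the "intercept focus events" idea concrete, here is a minimal, hypothetical client-side sketch in C++. Nothing here is an existing dear imgui API; `FocusAnnouncer` and both of its parameters are assumptions about what a backend hook might expose. The idea: each frame the application reports the currently focused widget, and the helper returns a label to pipe to the screen reader only when focus actually changes.

```cpp
#include <string>

// Hypothetical sketch (not part of dear imgui): client-side tracking of the
// focused widget so that a focus *change* can be piped to a screen reader.
struct FocusAnnouncer {
    unsigned int last_focused_id = 0; // 0 = nothing focused yet

    // Returns the label to speak when focus moves to a new widget,
    // or an empty string when focus has not changed this frame.
    std::string OnFrame(unsigned int focused_id, const std::string& label) {
        if (focused_id != 0 && focused_id != last_focused_id) {
            last_focused_id = focused_id;
            return label; // would be handed to the platform TTS layer
        }
        return std::string();
    }
};
```

Announcing only on change (rather than every frame) matters because an immediate-mode UI re-submits every widget each frame; without the comparison, a screen reader would be flooded with repeated utterances.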
Hi there! I know this has been open for over a year now. I was wondering whether anything has happened with the accessibility improvements for imgui. I am also a blind user who uses a screen reader and speech synthesizer to read text on screen. When I try to view GUI elements, my speech synthesizer can't read any text on screen. I try tabbing around, but hear nothing at all.
I am wondering if any improvement has occurred. I shall attempt to address the notes of @ocornut:
In general, the more information you can provide when it comes to widget events and information, the better. It is perfectly acceptable if the UI library is extremely verbose in terms of the data that client software can acquire about a widget; it is up to client software to determine what information it wants to use and discard. Of course, none of this matters if you want to implement OS accessibility interfaces, but that is most certainly not an easy process, particularly given that OSes like Linux have no de facto accessibility interface to begin with, and the ones that you might assume are de facto don't have very good documentation. Windows would be easier, if only because you only need to integrate UI Automation and you're good to go. macOS will pose a problem, however; Apple is notoriously tight-lipped about their accessibility interfaces, and I'm unsure if they are even documented anywhere. I hope that I was able to answer your questions, though.
Hi!
Thanks everyone for your input here. From Dear ImGui's point of view, implementing a system that submits contents on specific events/actions is possible, but there are likely going to be several edge cases where providing accurate/precise enough information may be difficult due to lack of context. As in, dear imgui itself rarely has enough context at its disposal, and it may occasionally require non-trivial digging or extra systems to retrieve the desired information. For example: combo boxes, list boxes, and menus are more or less general windows where any type of item can be submitted. From inside a specific item's code, they generally don't care or know where they are, but that information could be retrieved somehow.

(There is also general pressure that, by nature, everything in dear imgui needs to be implemented in a very optimal manner in terms of CPU usage, but I am slightly less worried about that because I can always help optimize the code. It bears noting, though, that this is a strict requirement that calls for non-trivial coding discipline. I'll help.)

I think we'd need to focus on first steps. Not being a user of screen readers, I am unlikely to be able to do it myself, but if someone wants to fork and do experiments, I can chat with them and provide assistance to move forward.
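As one way to picture "submitting contents on specific events/actions", here is a rough sketch of a per-frame accessibility event queue that a screen-reader bridge could drain. All names (`AccessEvent`, `AccessRecord`, `AccessQueue`) are hypothetical and do not exist in dear imgui; this is only an illustration of the data flow being discussed.

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical per-frame accessibility event queue (not a dear imgui API).
enum class AccessEvent { FocusGained, ValueChanged, Activated };

struct AccessRecord {
    AccessEvent event;
    std::string label; // e.g. "Volume"
    std::string value; // e.g. "0.50"; empty for widgets with no value
};

struct AccessQueue {
    std::vector<AccessRecord> records;

    // Widget code (or the library) would push verbose records here.
    void Push(AccessEvent e, std::string label, std::string value = {}) {
        records.push_back({e, std::move(label), std::move(value)});
    }

    // A screen-reader bridge would drain the queue once per frame,
    // leaving it empty for the next frame.
    std::vector<AccessRecord> Drain() {
        std::vector<AccessRecord> out;
        out.swap(records);
        return out;
    }
};
```

The verbose-record approach matches the earlier suggestion in this thread: the library emits as much data as it can, and the consumer decides what to speak and what to discard.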
Hi! |
Dear ImGui is a tech/software library used to display UI elements such as buttons, sliders, etc. Nowadays people have been increasingly using it for user-facing tools and are expecting new things out of it that are largely outside its initial scope, but that make sense with the increased adoption and user base. We don't have any simple/easy answer to your question other than stating that screen readers (unless some rely on OCR?) currently won't work with GUIs created with dear imgui. But discussions and work here may lead to improvements on that front. I would be happy to help an experienced programmer who wants to tackle this and move the needle forward.
I see. The only screen reader that I know of with OCR is NVDA. I have tried to use that in some cases, but have not had the best experiences.
Version/Branch of Dear ImGui:
Version: latest
Branch: master
Back-end/Renderer/Compiler/OS
Back-ends: imgui_impl_opengl3.cpp
Compiler: XXX
Operating System: XXX
My Issue/Question:
I am writing an application for a university project and I'm considering using IMGUI. However, I am also visually impaired, and therefore rely on a screen reader and speech synthesis to interact with computer programs. A screen reader is not necessary, however, if the application I'm using has built-in TTS support -- then that application becomes, for the time being, the screen reader; this is called "self-voicing".
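To illustrate the self-voicing pattern described above, here is a minimal sketch. Every name in it (`SpeechEngine`, `SelfVoicingApp`, etc.) is hypothetical, not part of dear imgui or any real TTS library: the application owns a speech interface and routes its own UI announcements through it, so no external screen reader is needed.

```cpp
#include <string>
#include <vector>

// Abstract speech output; a real application would implement this on top of
// a platform TTS engine. All names here are illustrative assumptions.
struct SpeechEngine {
    virtual ~SpeechEngine() = default;
    virtual void Speak(const std::string& text) = 0;
};

// Test double that records utterances instead of speaking them aloud.
struct RecordingEngine : SpeechEngine {
    std::vector<std::string> spoken;
    void Speak(const std::string& text) override { spoken.push_back(text); }
};

// Self-voicing application: it announces widget focus through its own engine,
// acting as its own screen reader.
struct SelfVoicingApp {
    SpeechEngine& tts;
    explicit SelfVoicingApp(SpeechEngine& engine) : tts(engine) {}
    void AnnounceFocus(const std::string& label, const std::string& role) {
        tts.Speak(label + ", " + role);
    }
};
```

Speaking both the label and the role ("OK, button") mirrors what conventional screen readers announce, which is part of why exposing widget role information from the library matters.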
In order for a screen reader of any kind, even a self-voicing application, to be considered "reasonably accessible" to people with disabilities, some invariants need to hold:
The above three invariants are minimum baseline requirements for an application to be made accessible with speech synthesis in a way that allows comfortable user interactivity. Does IMGUI allow these changes to be detected in some manner? I could not find it within the demo or examples. I did find an issue about tab navigation, but nothing about this particular problem. However, I do not want to make a false assumption when I may have overlooked something.