
Accessibility for the visually impaired #913

Open
wez opened this issue Jun 30, 2021 · 15 comments
Labels
enhancement New feature or request

Comments

@wez
Owner

wez commented Jun 30, 2021

My intent with this issue is to understand better what first-class support for the visually impaired looks like in a terminal emulator, and then figure out how to implement it.

My reading so far has found:

A discussion with Didier of the slint project gave me some leads:

Required Features

  • For low-vision users, it may be sufficient to have a quick and easy way to activate a color scheme with high contrast and large text
  • Some kind of screen reader and/or braille display support

Screen Reader

Based on my reading of the TDSR and emacspeak documentation, it seems like a key feature of a good terminal screen reading experience is being able to manage what is being read. I think of the terminal as having a few distinct interface elements:

  • The scrollback, which is the entirety of the text that is navigable/viewable
  • The viewport, which is the section of the scrollback that is visible in the window. This defines a relatively coarse visual "cursor" representing what the user is looking at.
  • The mouse cursor, which is used for selecting and interacting with text
  • The text cursor, which is used primarily by the application

WezTerm already has Copy Mode and Quick Select Mode for mouse-less selection.

For a screen reader, I think it may make sense to introduce an explicit read cursor that is conceptually similar to the viewport but much finer grained; based on the TDSR docs, it seems desirable to be able to specify the position based on character, word or line. To me, controlling this feels similar to the navigation functions available in mouseless copy mode, except that the data is read aloud instead of being copied.
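The read cursor described above could be modeled roughly like this. This is a toy sketch, not wezterm's API: the `Granularity` and `ReadCursor` names are hypothetical, and byte indexing assumes ASCII text for brevity.

```rust
// Hypothetical model of a read cursor that navigates scrollback by
// character, word, or line, returning the text that should be spoken.
// Not wezterm's actual API; ASCII-only indexing for brevity.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum Granularity {
    Character,
    Word,
    Line,
}

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct ReadCursor {
    pub line: usize, // index into the scrollback
    pub col: usize,  // byte offset within the line
}

impl ReadCursor {
    /// Advance by one unit and return the text to be read aloud.
    pub fn advance<'a>(
        &mut self,
        scrollback: &'a [String],
        gran: Granularity,
    ) -> Option<&'a str> {
        let line = scrollback.get(self.line)?;
        match gran {
            Granularity::Line => {
                let text = line.as_str();
                self.line += 1;
                self.col = 0;
                Some(text)
            }
            Granularity::Word => {
                let rest = &line[self.col..];
                let trimmed = rest.trim_start();
                if trimmed.is_empty() {
                    // Nothing left on this line; continue on the next one.
                    self.line += 1;
                    self.col = 0;
                    return self.advance(scrollback, gran);
                }
                let start = self.col + (rest.len() - trimmed.len());
                let end = line[start..]
                    .find(' ')
                    .map(|i| start + i)
                    .unwrap_or(line.len());
                self.col = end;
                Some(&line[start..end])
            }
            Granularity::Character => {
                if self.col >= line.len() {
                    self.line += 1;
                    self.col = 0;
                    return self.advance(scrollback, gran);
                }
                let ch = &line[self.col..self.col + 1];
                self.col += 1;
                Some(ch)
            }
        }
    }
}
```

The point of the sketch is that the same cursor state serves all three granularities, which mirrors how copy-mode navigation already offers character/word/line motions.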

In a situation where the rate of output is high, it seems like it might be useful to have the read cursor move independently from the viewport so that it can lag behind the output. Having a notification to inform the user that the output rate is high seems like it might be valuable; eg: "NOTE: one megabyte of text has been output in the past ten seconds", or "The next line is no longer in the scrollback" if it got scrolled away. Notifications will need to trigger some audio cue (perhaps an adjustment to the speed/tone parameters) to disambiguate the notification from what was being read.
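The "high output rate" notification could be driven by a simple sliding-window byte counter. A minimal sketch, with an illustrative type name, threshold, and message wording (none of these are wezterm defaults):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Sliding-window byte counter used to decide when to announce that
/// a large amount of output has arrived recently. Names and
/// thresholds here are illustrative, not wezterm's.
pub struct OutputRateMonitor {
    window: Duration,
    threshold_bytes: usize,
    samples: VecDeque<(Instant, usize)>,
    total: usize,
}

impl OutputRateMonitor {
    pub fn new(window: Duration, threshold_bytes: usize) -> Self {
        Self {
            window,
            threshold_bytes,
            samples: VecDeque::new(),
            total: 0,
        }
    }

    /// Record `len` bytes of output observed at `now`; returns a
    /// notification message when the windowed total crosses the threshold.
    pub fn record(&mut self, now: Instant, len: usize) -> Option<String> {
        self.samples.push_back((now, len));
        self.total += len;
        // Drop samples that have aged out of the window.
        while let Some(&(t, n)) = self.samples.front() {
            if now.duration_since(t) > self.window {
                self.samples.pop_front();
                self.total -= n;
            } else {
                break;
            }
        }
        if self.total >= self.threshold_bytes {
            Some(format!(
                "NOTE: {} bytes of text have been output in the past {} seconds",
                self.total,
                self.window.as_secs()
            ))
        } else {
            None
        }
    }
}
```

The audio-cue disambiguation (speed/tone change) would sit on top of this, triggered whenever `record` returns a message.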

Part of supporting this in wezterm is understanding how to model and interact with the read cursor, but another part is actually having the text read out. WezTerm runs on multiple platforms, so there are a number of possibilities.

Targeting Speech Dispatcher seems like a good, portable, first step, and seems like there is a pretty straightforward way to address and communicate with the speech server.
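Speech Dispatcher clients talk SSIP, a line-oriented text protocol, typically over a Unix socket. As a rough sketch of just the data-framing half of a `SPEAK` request: after the client sends `SPEAK` and the server acknowledges, the text is transmitted terminated by a lone `.` line, with lines that begin with `.` escaped by doubling the dot (SMTP-style). Connection setup and response parsing are omitted here; the function name is made up for illustration.

```rust
/// Frame the data phase of an SSIP SPEAK request. After sending
/// "SPEAK\r\n" and receiving the server's go-ahead, the client
/// transmits the text terminated by a lone "." line; data lines
/// that start with "." are escaped by doubling it, as in SMTP.
/// Illustrative sketch; not a complete SSIP client.
pub fn ssip_speak_payload(text: &str) -> String {
    let mut out = String::new();
    for line in text.split('\n') {
        if line.starts_with('.') {
            out.push('.');
        }
        out.push_str(line.trim_end_matches('\r'));
        out.push_str("\r\n");
    }
    out.push_str(".\r\n");
    out
}
```

A full client would also send commands like `SET SELF CLIENT_NAME` during the handshake and read numeric status replies, but the framing above is the part a terminal would exercise most often.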

Braille Displays

I don't fully understand these, but am working so far under the assumption that they can be modeled similarly to a screen reader, except that instead of (or perhaps in addition to) speaking the output, the output is sent to the braille display. It seems likely that the read cursor concept touched on above can map to both screen readers and braille displays.

@wez wez added the enhancement New feature or request label Jun 30, 2021
@follower
Contributor

follower commented Oct 2, 2021

Appreciate seeing a desire to increase the accessibility of wezterm.

Some quick thoughts:

Hope this helps to move the accessibility process forward a little. :)

@ChrisJefferson

This is just a drive-by message, but a blind friend of mine says their best experience comes from using Apple's built in accessibility with the standard Apple terminal.

@mwcampbell

@wez You mentioned in #912 that Slint is looking for a more accessible terminal for the installation process. I'm following up here since this issue is more relevant to that discussion. Can you point to a page or post where they've talked about this? I wonder what they find lacking in the current solutions, Speakup and brltty.

There are basically two ways you could approach accessibility in this project: implement the platform accessibility APIs (UI Automation on Windows, NSAccessibility on Mac, and AT-SPI on Unix desktops) so screen readers can find out what's in the window and present it as they see fit, or add text-to-speech output directly in wezterm. Your interest in running wezterm directly on the framebuffer, without using X or Wayland, never mind something like GNOME, suggests to me that you're interested in the latter approach. Is that correct? I doubt that that's something we really need, though as I said, I'd like to know what the Slint project is looking for in this area.

@ChrisJefferson I'm surprised that your blind friend had such a positive experience using macOS's Terminal app with VoiceOver. If I remember correctly, tdsr was written specifically to improve access to the terminal on macOS.

@wez
Owner Author

wez commented Dec 29, 2021

@mwcampbell per tspivey/tdsr#19 (comment), I reached out to Didier from the Slint project on libera.chat. It's possible they have an archive from that channel that you can search for the full context, but summarizing it here: the scenario they described to me was for an accessible terminal to be usable during installation of the distribution itself, running against the linux framebuffer console. That conversation was a bit more about low-vision users than it was about screen readers: large fonts, high contrast.

In terms of what I'd like to implement: I think it would be good to integrate with the various platform accessibility APIs, but the limited set of people I've interacted with about this were generally negative about the platform provided features. If the sort of people that are in the intersection of being terminal users and having accessibility requirements are not well-served by the platform provided features, then I'm not excited to implement support for each platform and would rather target something that is more impactful for that group of users. The impression I had was that targeting Speech Dispatcher might be something of a sweet spot.

@mwcampbell

@wez Thanks for clarifying what the Slint team wants.

It's possible that once my AccessKit project matures some more, it could help you implement both approaches to accessibility, without duplicating work on your end. The central concept of AccessKit is an accessibility tree. For each distinct frame, the application (or its GUI toolkit) pushes either a full tree snapshot or an incremental tree update to an AccessKit consumer. Usually the consumer is a platform adapter, which implements an accessibility API such as UI Automation or AT-SPI using the accessibility tree. The Windows platform adapter is the most mature at this point, though work has started on Mac and AT-SPI adapters. None of these adapters support multi-line text widgets yet, and I haven't finalized the representation of these widgets in the tree, so AccessKit isn't ready yet for wezterm to use. But another thing I've been thinking about, in addition to these platform adapters, is implementing a screen reader as an AccessKit consumer, probably using the tts crate. One could just as well output to a braille display using BrlAPI. So when AccessKit is ready, you could implement support for it in wezterm, then offer a choice of accessibility solutions.
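The push model described here, where each frame carries either a full snapshot or only the nodes that changed, can be illustrated with toy types. These are deliberately not the real accesskit crate's types, whose API is richer and has evolved over time:

```rust
use std::collections::HashMap;

type NodeId = u64;

#[derive(Clone, Debug, PartialEq)]
struct Node {
    role: &'static str,
    text: String,
    children: Vec<NodeId>,
}

/// Toy stand-in for an AccessKit-style consumer: the app pushes
/// updates, and the consumer maintains the resulting tree. Both a
/// full snapshot and an incremental update are just lists of
/// (id, node) pairs; untouched nodes carry over unchanged.
#[derive(Default)]
struct Consumer {
    tree: HashMap<NodeId, Node>,
}

impl Consumer {
    fn push_update(&mut self, update: Vec<(NodeId, Node)>) {
        for (id, node) in update {
            // Replace or insert each node mentioned in the update.
            self.tree.insert(id, node);
        }
    }
}
```

The appeal for an application like wezterm is that the same update stream can feed a platform adapter (UI Automation, AT-SPI) or a built-in screen reader, without the app caring which consumer is attached.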

@wez
Owner Author

wez commented Dec 29, 2021

AccessKit looks interesting! I'd definitely be interested in integrating it in wezterm once we're both a little further along; I'm considering some internal changes in wezterm that will make it easier to translate its content to the accessibility tree.

@TheQuinbox

@wez, any updates on this? I definitely want to try out wezterm :)

@wez
Owner Author

wez commented Jun 25, 2022

@TheQuinbox sorry, no progress yet. There's still some work needed to the internals to get data into the right shape; that work is needed to help with some rendering performance/caching changes, so it will happen, but hasn't yet.

Then: figure out how to plumb it into something that is actually useful for users such as yourself.

Would you mind sharing details about which platform(s) you use?

@mwcampbell

And of course, everyone's waiting for me to get back to work on AccessKit. Hopefully that will start in the next few weeks.

@TheQuinbox

@wez, I use Windows 10 with the NVDA screen reader, and macOS Ventura with TDSR/VoiceOver, and I can tell you, TDSR does it exactly how I want it to work.

@smythp

smythp commented Aug 6, 2024

Came across this thread and was wondering if there are any updates. Thanks for looking into this!

@mwcampbell

Sorry, forgot to update this issue with relevant info on the status of AccessKit. Basically, I think AccessKit is now complete enough, across all three desktop platforms, that it could be used to make a terminal accessible, at least in theory. I don't currently have time to work on this myself, though. For anyone who does want to work on this, I'd suggest looking at how egui uses AccessKit, and particularly how it exposes detailed information about text in crates/egui/src/text_selection/accesskit_text.rs. At a minimum, you'd need a root node with a role of Window, a child node with a role of Terminal, and under that, a child with a role of InlineTextBox for each line of text.
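The minimal tree shape described in this comment can be illustrated with stand-in types (not the accesskit crate's actual `Role`/`Node` API): a Window root, a Terminal child, and one InlineTextBox per visible line.

```rust
/// Sketch of the minimal accessibility tree shape described above.
/// These types are illustrative stand-ins, not the accesskit crate's API.
#[derive(Debug, PartialEq)]
enum Role {
    Window,
    Terminal,
    InlineTextBox,
}

#[derive(Debug)]
struct Node {
    role: Role,
    text: Option<String>,
    children: Vec<Node>,
}

/// Build Window -> Terminal -> one InlineTextBox per line of text.
fn build_terminal_tree(lines: &[&str]) -> Node {
    let line_nodes = lines
        .iter()
        .map(|l| Node {
            role: Role::InlineTextBox,
            text: Some((*l).to_string()),
            children: Vec::new(),
        })
        .collect();
    Node {
        role: Role::Window,
        text: None,
        children: vec![Node {
            role: Role::Terminal,
            text: None,
            children: line_nodes,
        }],
    }
}
```

In a real integration the lines would come from the viewport, and each frame would push an update for the lines that changed rather than rebuilding the whole tree.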

@smythp

smythp commented Aug 7, 2024

Great news, thank you for the update.

@DataTriny

I've started implementing AccessKit inside the base window crate in #5995 but quickly ran into issues.

@wez I'd appreciate input from you: how do you see the window crate evolving to accommodate this use case? Please note that the requirements listed in the PR are mostly not specific to AccessKit, but rather reflect how the accessibility stack of each platform expects apps to behave.

Thanks.

@wez
Owner Author

wez commented Sep 22, 2024

Commented on the PR
