EditContext API #416

Closed
3 of 5 tasks
snianu opened this issue Sep 4, 2019 · 27 comments
Labels
Resolution: satisfied (The TAG is satisfied with this design) · Topic: accessibility · Topic: Input · Venue: Web Apps WG (W3C Web Applications Working Group)

Comments


snianu commented Sep 4, 2019

Hello, TAG!

I'm requesting a TAG review of:

Further details:

We'd prefer the TAG provide feedback as (please select one):

  • open issues in our GitHub repo for each point of feedback
  • open a single issue in our GitHub repo for the entire review
  • leave review feedback as a comment in this issue and @-notify [github usernames]

Please preview the issue and check that the links work before submitting. In particular, if anything links to a URL which requires authentication (e.g. Google document), please make sure anyone with the link can access the document.

¹ For background, see our explanation of how to write a good explainer.


torgo commented Feb 25, 2020

We are reviewing today. One question I had was about the security & privacy questionnaire. It seems like this technology would have significant access to sensitive information... Section 2.4 maybe needs to elaborate on how it would mitigate against misuse and spell out possible abuse cases?


kenchris commented Feb 25, 2020

Dan was worried about the access to sensitive information. I assume, though, that this will only work with input methods shipped with the browser, like the emoji picker and accessibility helpers.

This is an API for the web site author to integrate with these, not a way for developers to create their own input methods that would then get access to sensitive information such as selected text, passwords, etc.

Maybe this could be made clearer early on in the explainer.

@BoCupp-Microsoft

My definition of sensitive data would be any information conveyed by the user to the site that the user didn't explicitly intend to provide.

In the case of EditContext, which deals with delivering text that has been input by the user to the active document, I expect that the user's intent is to provide the site with the text that has been typed on a keyboard, composed in an IME, spoken to an OS speech-to-text input mechanism, etc. EditContext doesn't directly expose any details of how the text was provided; it exposes only the resulting text and some direction as to how the text should be decorated during the process of composition.

The text data input by the user is already available through alternative means, e.g. the beforeinput event; the EditContext is just providing it in an alternate way that is decoupled from the DOM. The decoration information seen in the IDL as part of the TextFormatUpdateEvent is new, but we don't consider it sensitive.

If there is a threat to be considered for formatting information, it would be that an author may differentiate one input method from another based on the conventions that the input method has adopted for formatting its text during composition. For example, speech input may use a dotted gray underline while a Japanese IME would use a solid black underline. This formatting data is necessary for sites like Google Docs and Office Online to meet user expectations during text input, and we believe it's acceptable to expose this information to web sites so that the user can input text in a way that is consistent with their experience.
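
To make the formatting discussion concrete, here is a minimal sketch of how a site might consume that decoration data (based on the textformatupdate event in the current spec draft; editorEl and renderUnderline are hypothetical placeholders, and exact property names may differ from the IDL as it stood at the time of this review):

const editContext = new EditContext();
editorEl.editContext = editContext; // editorEl: the site's editable view element

editContext.addEventListener("textformatupdate", e => {
    for (const format of e.getTextFormats()) {
        // Decorate the composed range (e.g. draw the underline the IME
        // requested) so input looks consistent with the user's experience.
        renderUnderline(format.rangeStart, format.rangeEnd, format.underlineStyle);
    }
});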

Note that there are other mechanisms that may already reveal similar information about the user. One example is the heuristic by which IMEs suggest, for some phonetic input, the candidates most frequently selected by the user. The first candidate after typing the phonetic input will be inserted into the DOM and be visible to the author's script. While this may provide some new bit of information to fuel fingerprinting, it allows the user to have a fast and consistent input experience, which IMO outweighs the minor privacy concern.

I hope this helps. If this is the information you're looking for we're happy to include it in the explainer. If you need more or disagree with any of the points we've raised please let us know.

Thanks!


cynthia commented May 27, 2020

@atanassov and I discussed this during the VF2F.

From a high-level perspective, we are happy with this proposal. It's a complex, previously unaddressed problem and we are glad to see someone working on this.

I personally would like to see how this is expected to behave in corner-case scenarios, and what chain of events comes out in different user scenarios across different languages (e.g. non-composing, composing without candidates, and composing with candidates), to better understand whether we are actually solving this problem once and for all.

As for the privacy issue noted above, we discussed this at length and concluded that this is probably a non-issue.


cynthia commented Sep 24, 2020

Nit discovered by @hober during the F2F review: there seems to be a typo/error in Examples 1 and 3; the former references EditView, but the implementation is named EditableView.


hober commented Sep 24, 2020

Also, are this.computeSelectionBoundingBox() and computeSelectionBoundingRect() supposed to be the same thing? If so, what is it? Is it the bounds of the current selection, or is it the bounds of the editable area within which selection can occur?


alice commented Sep 24, 2020

This looks promising, but it's a very complex API and the explainer is quite terse and seems to omit a lot of detail, so it's hard to review the details of the API.

For example:

Additionally, the layout bounds of selection and conceptual location of the EditContext in the view should be provided by calling updateLayout.

window.requestAnimationFrame(() => {
    editContext.updateLayout(editContainer.getBoundingClientRect(), 
                             computeSelectionBoundingRect());
});

I'm not quite sure what this is doing. Update the layout of what? Why does it take these two rectangles? Why does it need to be done asynchronously (via rAF)?

In general, it would be helpful at least if the IDL had extensive comments explaining the purpose of each enum, object and method. The code examples could also use more extensive comments to explain each call into the proposed API.

Also, it would be great to illustrate via targeted (minimal) code examples how the API solves each of the problems listed in the Real-world Examples section. That section is extremely helpful to understand the context of the API, but it's not made explicit how the API solves those problems.


cynthia commented Sep 24, 2020

Additionally, it's a bit unclear how this would work with RTL languages, since it involves selection and selection behaves differently there. Do you have any thoughts on this?

cynthia added the Progress: pending external feedback (the TAG is waiting on a response to comments/questions asked during the review) and Progress: pending editor update (the TAG is waiting for a spec/explainer update) labels and removed the Progress: in progress label (Sep 24, 2020)
plinss removed this from the 2020-09-21-F2F-Cork milestone (Oct 14, 2020)
torgo added this to the 2022-11-28-week milestone (Nov 26, 2022)

torgo commented Nov 28, 2022

Hi @snianu, I notice there have been no updates since 8 November. Does that mean you're ready for us to re-review? Thx!

@alexkeng

Hi @torgo, no, EditContext is not ready for re-review; in fact, the project is on hold at the moment due to resource constraints. We'll update the ticket when we restart the project, thanks!


torgo commented Dec 1, 2022

Ok - in that case I think we're going to go ahead and close this one. Please ping the issue or a TAG member directly when you think it's ready to be re-opened. Thanks!

torgo closed this as completed (Dec 1, 2022)
torgo added the Progress: stalled label and removed the Progress: pending editor update (the TAG is waiting for a spec/explainer update) label (Dec 1, 2022)
torgo removed this from the 2022-12-12-week milestone (Dec 1, 2022)

snianu commented Jun 6, 2023

@torgo @dandclark is now actively working on EditContext and has been updating the spec as well. Requesting to re-open this issue to continue TAG review for this feature. Thanks!


dandclark commented Jun 6, 2023

Hello, TAG!

Apologies for the extended delay here. We've got capacity to pick up this spec again, and I'm going to take point on driving it.

Can this issue be reopened, or should I file a new issue for the review?

The spec draft for the feature has evolved quite a bit from when this was initially opened, and some of the materials have moved around. @snianu's top post has been edited with the new links, or you can find the spec draft directly here.

To start addressing some of the last questions on this thread from @plinss, the intended behavior when an EditContext has children is that there will be a mix of behavior borrowed from contenteditable and some behavior that diverges. This section overviews these similarities and differences: https://w3c.github.io/edit-context/#edit-context-differences.

Essentially, the browser will handle caret navigation, and child content of the EditContext will inherit editability such that the user can click or arrow-key into it and move the caret around in it. Where EditContext primarily differs is that when the user tries to add or delete content via key or IME input, the browser will not modify the content automatically. Instead, it will fire events against the EditContext so that the page author can perform the modification as they see fit.

A primary goal of this API is to give the author a primitive to interact with OS text input services more directly. This is done via a plain text buffer since that's what the text input services use. The authors are then responsible for translating this into the view that will be presented to the user, either building it with DOM nodes or painting to a <canvas>.
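
As a rough sketch of that model (event and method names follow the current spec draft, while editorEl, model, and render stand in for whatever view element, document model, and rendering code the author maintains):

// Attach an EditContext to the element that should receive text input.
const editContext = new EditContext({ text: "", selectionStart: 0, selectionEnd: 0 });
editorEl.editContext = editContext;

// On key or IME input the browser updates the EditContext's plain text
// buffer and reports the change, rather than mutating the DOM itself.
editContext.addEventListener("textupdate", e => {
    // Apply the change to the author's own document model...
    model.replaceText(e.updateRangeStart, e.updateRangeEnd, e.text);
    // ...then re-render the view however the author likes (DOM or <canvas>).
    render(model);
});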

@dandclark

@torgo tagging you directly about the request to reopen since I don't know if there's much visibility for closed threads. Thanks!

chrishtr reopened this (Jun 8, 2023)

chrishtr commented Jun 8, 2023

(I went ahead and reopened, hope that's ok.)


torgo commented Aug 28, 2023

torgo modified the milestone from 2023-08-28-week to 2023-09-04-week (Sep 3, 2023)
torgo modified the milestone from 2023-09-04-week to 2023-10-09-week (Oct 8, 2023)

plinss commented Oct 19, 2023

Thanks for the updates; at this point we're going to close this as satisfied. I have a few personal concerns about usability, but I think experience working with the API will be the determining factor there. My primary concern was avoiding under-specified behavior, and that seems to be addressed now. Thanks for flying TAG.

plinss closed this as completed (Oct 19, 2023)
plinss added the Resolution: satisfied (the TAG is satisfied with this design) label and removed the Progress: stalled label (Oct 19, 2023)