Support >>> combinator in static profile (#78)
Comments
This seems like quite a basic affordance, without which there is an unbridgeable gap between the Chrome and Firefox implementations (utilise/all#1 | doc). We rely quite heavily on this ability. It looks like everyone is on the same page too (i.e. good in JS, bad in CSS). Is there any chance this could be supported? |
I renamed the subject of this issue to a more appropriate one. |
AFAIK,
Please feel free to correct me if I am wrong. |
Thanks, @hayatoito. Are there any more updates on this issue? For anyone opposing this: is there a sensible alternative solution/proposal (perhaps something like …)? |
What are the use cases that require walking across all those shadow trees? |
Essentially, we have a small core that stores (browser-like) resources and emits an event when they change:

    ripple(name)                                  // getter
    ripple(name, body)                            // setter
    ripple.on('change', (name[, change]) => {})   // change listener

Other modules like the view layer simply listen for changes and update anything on the page that is affected. For example, components are simply transformation functions of data, so registering a new version of a component would redraw all instances of that custom element. This amounts to the following:

    ripple.on('change', name => document.querySelectorAll(name).map(d => d.draw()))

Other types of resources may also change, such as stylesheets or data, and the view layer would use different selectors for those. Now, since there may be relevant custom elements to update in the shadows of other custom elements (the majority case), the selector needs to reach into open shadow trees instead of only the document. Being able to use >>> in querySelectorAll would cover this directly. |
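For illustration, here is roughly what that would look like if `>>>` were allowed in the static profile (the exact `html >>> name` form is an assumption on my part; no current browser accepts this):

```js
// Hypothetical: '>>>' accepted by querySelectorAll (static profile only).
ripple.on('change', name => {
  document.querySelectorAll(`html >>> ${name}`).forEach(el => el.draw());
});
```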
But anyone could add a new instance of a custom element dynamically to any shadow tree so you'd end up continuously monitoring mutations to every shadow tree. This is extremely inefficient. Just walking the entire composed tree to find all instances of a given custom element alone is too expensive. If you're concerned about tracking all instances of custom elements, wrapping custom elements' lifecycle callbacks might be a better approach (i.e. track lifecycle callbacks in your framework, and then delegate back to each custom element later). |
There is no need to watch for mutations. You just run the query when something actually changes. The lifecycle callbacks are wrapped, which is what allows for the dynamic registry of components in the first place. The problem is not with the lifecycle callbacks, however. It's when something else changes, like receiving new data/stylesheets/components from the server, and then you need to find the elements to rerender. If you are suggesting to use the {attached,detached}Callbacks to build some sort of internal map and keep track of every instance, then that is exactly what would be nice to avoid by relying on >>> instead. |
I'm saying that providing such an API is undesirable because looking for all elements in the composed tree that match a given selector is expensive. A better approach would be for each component to keep track of its subcomponents within its shadow tree (e.g. at construction time), and recursively call the update method on them. |
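A minimal sketch of that per-component approach (the `draw()`/`render()` method names and the `subcomponents` bookkeeping are mine, not from the thread):

```js
class BaseComponent extends HTMLElement {
  constructor() {
    super();
    this.attachShadow({ mode: 'open' });
    // Each component remembers the subcomponents it creates in its own shadow tree.
    this.subcomponents = [];
  }

  addSubcomponent(el) {
    this.subcomponents.push(el);
    this.shadowRoot.appendChild(el);
    return el;
  }

  // Redraw this component, then recurse into tracked subcomponents,
  // so nothing outside ever needs to pierce a shadow boundary.
  draw() {
    this.render();
    for (const child of this.subcomponents) child.draw();
  }

  render() { /* overridden by concrete components */ }
}
```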
I understand, but either way we are searching for instances of elements across the composed tree. The suggestion to implement this in JS would be (a) orders of magnitude more expensive/inefficient than if done natively (b) a lot of unnecessary management overhead in all hot code paths (c) hard to do reliably (d) more assumptions, worse for interoperability and the different contexts components can work in (e) requires redrawing everything from the top-level component down for every change. One of the reasons it's currently so fast (2x optimised React on DBMonster) is because (a) there are no intermediate data structures to manage and (b) we can jump immediately to a lower part of the tree and then recursively draw from there, rather than always starting right at the top. This solution is also more in the spirit of progressive enhancement (in contrast to e.g. extending MutationObservers) and even if every framework built their own shadow-dom-walker-tracker (itself undesirable), being able to easily walk shadows is a more generally common use case, like the testing issue in the original thread, hence why it makes sense to favour the more generic solution. |
The whole point of shadow DOM API is to provide encapsulation. If your app needs to constantly break that encapsulation, then it's probably better not to use shadow DOM API in the first place. In terms of testing, I don't think reaching into shadow trees of a component while testing a web page that uses a given component is a common scenario. You'd typically unit-test each component separately, and the integration test would be testing the page using public API of each component. In fact, I'd argue it's an anti-pattern. For example, if a library author published a component and various websites that use that component started poking into its shadow tree in websites' tests then the library author may no longer be able to change its shadow tree structure at his/her will because those changes may break those websites. |
I expected this more ideological argument and hence tried to sidestep it (with "at least open"). Let's clarify two general use cases for Web Components:

(1) Componentisation of applications: for example, using a fractal architecture where each component composes other subcomponents structured in the same way.

(2) Embedding third-party leaf components (e.g. google-maps, facebook-like, twitter-counts, etc).

Open shadows are suitable for the former and closed shadows for the latter (completely bullet-proof encapsulation). You still want to be able to benefit from things like style encapsulation, but there is more involved with application-level components, like state management. It appears you are being dismissive of the former use case, in which case Web Components become relegated to just being a poor man's iframe (iframes are arguably better for the latter use case, since they are far more "strongly encapsulated": e.g. separate component registries, @font-face, etc).

For testing, you are again extrapolating your conclusion from one narrow case: I agree relying on the implementation of a third-party component would be brittle. But this isn't the only use case. It's reasonable to dispatch an event in one part of the application and make assertions about what changed on the other side. The view that everything should be controllable and observable from the top-level component's public API is obviously impractical and brittle (every subcomponent has to proxy everything).

Hence I think it is worth first explicitly clarifying your view on open shadows and the former use case. Is the design goal of shadows to simply maximise encapsulation, so that they are not useful throughout an application and are only useful as third-party leaf nodes? If yes, then open shadows should be dropped and this purpose made more clear. If not, then the need for some sensible way to recursively walk shadows (until hitting a closed one) becomes evident and we can go back to discussing whether APIs like this are more suitable in JS or native. |
I've been using shadow DOM to write our performance tracking tool, and many of its "components" have to track dozens of states. Yet, I've never found it useful having to reach into some other component's shadow tree in order to get access to those states. In practice, all those states are stored in some sort of JavaScript objects and properties, and components should be exposing some public API to access those states.
In such tests, there should be a test that verifies a given component dispatches an event, and a separate test which verifies that the given component responds to the event. You could even manually inspect those components' shadow trees during testing, but that doesn't require anything akin to >>>.
There is …
I disagree with your premise that maximizing encapsulation is not useful for components within an application, given my experience of having successfully written a web app that's made up of dozens of closed shadow tree components. Having said that, Apple's position has always been that the open mode of shadow DOM is a mistake. The only reason we have "open mode" shadow trees is because we made a compromise to keep both modes without a default during the April 2015 F2F meeting, for the sake of moving the shadow DOM API as a whole forward. |
I'm not sure what exactly your perf tools do: I agree that explicitly proxying and transforming state is a good thing. However, it's not the only use case. Having built multiple apps in this manner, the issue is that this approach does not scale. When you have a component 10 layers deep and you need to change its requirements, every component in the path in between now has to explicitly proxy the new data down, which makes this approach very brittle. This is not about "inter-component communication" (?). Only events are used for that in a "data down, actions up" paradigm. This is about either a deep or recursive API being necessary for the framework layer to manage updates. It might be beneficial to read more around things like Redux + React (but as aforementioned, it's not strictly just data; see for example Om Next (30:36 onwards) where they maintain a live index of all instances on the page). The …

It appears you are erasing the problem here by suggesting to abandon integration tests for unit tests (two separate unit tests)? Testing end-to-end flows in an application happens across components. The introspection required by the testing layer, similar to the framework layer, transcends the component tree, and it can't be expected to use the public API of components for this purpose. It's good to know Apple is opposed to open shadows. However, I think going forward, unless your goal is to sabotage the usability of open shadows, it's better not to mix the two problems. There's very little reason to oppose this hook for open shadows (especially given the existence of upward piercing), and once you all work out how to make closed shadows more usable, then you can discuss deprecating open shadows altogether. |
Oh I see. This is exactly one weakness I see with the current web components as well. But I don't think exposing shadow roots is the right solution for this. What we need is an event-like system that propagates things downwards instead of upwards, e.g. … |
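The concrete examples were elided above, but purely as a hypothetical illustration (this is not an API anyone in the thread proposed), a downward-propagating notification might look something like this:

```js
// Hypothetical sketch: walk open shadow roots top-down and let each
// component opt in to a named notification, instead of outside code
// selecting nodes inside the component's shadow tree.
function broadcast(root, type, detail) {
  for (const el of root.querySelectorAll('*')) {
    if (typeof el.onBroadcast === 'function') el.onBroadcast(type, detail);
    if (el.shadowRoot) broadcast(el.shadowRoot, type, detail); // closed roots stay sealed
  }
}

// e.g. broadcast(document, 'theme-change', { theme: 'dark' });
```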
^ Both are great alternatives I hadn't thought about before 👍. If you have a small example of how their APIs might look, I can try and give some feedback from the perspective of my use cases. |
Sorry, I don't have time to design a new API like that at the moment, but I'm more than happy to revisit this in a couple of months. |
Let me continue the discussion on this thread; here's the summary so far. The original proposal was to allow '>>>' (the shadow-piercing combinator) in the static profile only. Chrome had '/deep/' and '::shadow', but these were deprecated for use in CSS. The idea was to limit this combinator to querySelector/querySelectorAll, which run one-off. The answer to theme-like styling as of now is to use CSS custom properties, but there is no good solution for JS other than recursively applying querySelector on each shadow root (example). The concerns so far are that it breaks the encapsulation of Shadow DOM, and the implementation and performance cost of selector matching across shadow boundaries.

If Shadow DOM is closed mode-only, this combinator of course doesn't make sense, but for open shadow trees, providing a way to select nodes in shadow trees should make sense. One alternative idea from @annevk was to have a separate method such as deepQuery() instead of a combinator.

I think the current situation is like when we had to include jQuery because the platform didn't provide querySelector, so wouldn't it be nice if the platform provided an API to select nodes deep into open Shadow DOM without needing the recursive polyfill. Any comments and opinions are welcome! |
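For reference, a minimal sketch of the recursive userland fallback mentioned above (the helper name is mine, not from the thread):

```js
// Collect all elements matching `selector`, descending into open shadow roots.
// The selector is evaluated separately within each tree, so it cannot describe
// ancestor/descendant relationships that cross a shadow boundary.
function queryDeepAll(selector, root = document) {
  const results = [...root.querySelectorAll(selector)];
  for (const el of root.querySelectorAll('*')) {
    if (el.shadowRoot) results.push(...queryDeepAll(selector, el.shadowRoot));
  }
  return results;
}

// e.g. queryDeepAll('my-widget').forEach(w => w.draw());
```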
Please remove "breaks encapsulation of Shadow DOM" from the concerns, because that does not apply to open shadow trees. An open shadow root already provides access from the outside via element.shadowRoot. The point is that '>>>' in the static profile would only be syntactic sugar for what authors can already do by hand. |
"The concerns so far" is the summary in this thread, and IIUC the main point that Apple opposes having this is breaking the encapsulation, as in #78 (comment). Thanks for the clarification. |
Yeah, I would like to clarify that '>>>' in the static profile does not break the encapsulation of open shadow trees any further. The point is whether having this syntax sugar in the Web Platform is worthwhile or not. |
Yes, @hayatoito is exactly right. |
Given we have a fundamental disagreement about the usefulness of '>>>', let me go over the use cases raised so far. For example, propagating React-render-like states to subcomponents is NOT an appropriate use case for '>>>'. Writing integration tests for a Web app consisting of hundreds of components is another use case that came up. I'd argue that such an integration test should not be relying on internal states of each component. Instead, it should be testing based on each component's public interface. However, if authors really wanted to do white-box testing across components, they could simply write an equivalent of a shadow-piercing querySelectorAll in JS themselves. |
Look at any non-WC page that searches for elements in the page with querySelector/All(), for any reason. Now, imagine they switched to make heavy use of WC, such that there was a nice hierarchy of shadow trees. Boom, that's a use-case. (Unless you can reasonably argue that, in the new version of the page, they'll have no need to search for elements from the top level of the page anymore, and can rely on specific elements to do all the searching locally within their own tree. That would be a difficult argument to make, I think.) And as a reminder, not having '>>>' in the static profile just means authors keep rolling their own versions of it in JS. And the alternative API proposal (querySelectorAgainstFlatTree) is weaker than a combinator, because it can't express where in the selector the shadow-boundary crossing is meant to happen. |
A use case is a concrete scenario / story. Imagining that any page which doesn't use web components but uses querySelectorAll might start using web components is too abstract and generic to be called a use case. As a counter example, I have a website (perf.webkit.org) with ~20k lines of JS/HTML/CSS code which didn't use web components and used querySelector and querySelectorAll. I've successfully switched to shadow DOM & custom elements without ever needing to find elements across shadow boundaries with querySelector or querySelectorAll. As far as I can tell, a shadow-piercing selector would not have made that any easier. Even for testing, the need for such a querySelectorAll never came up, because we could just use JS functions to find the right element to do the necessary white box testing.
This is precisely why we like that approach better. The biggest problem we have with the proposed API is that we'd have to introduce very complicated shadow-boundary-transitioning code into our selector matching engine. |
Can you give me an example? The most compelling use case I've heard so far for needing a shadow piercing querySelector is for testing. Did you instead implement your own tree walking function? |
Basically, we decided not to write tests that pierce across shadow boundaries. We wrote unit tests for each component where we reach out into each shadow tree and check its state, and once we've done that, we never reached into shadow trees in our integration tests. This is how builtin HTML elements work by the way. As a web developer, you never test internal implementation details of builtin HTML elements. The best you can do is to check its publicly exposed states, and that's exactly what we do with our components, and I'd say it's working fairly well for us. |
That falls apart when you have interactive elements in shadow roots (like <button>). You end up writing public accessors just to test the component.

Having worked with this a lot (and being the author of a very popular gist on this subject), I don't want the shadow piercing combinator. What I do want is a selector that only works for testing that selects from the current element's shadow root:

    document.querySelector('custom-element => button');
    // Must be currently written as:
    document.querySelector('custom-element').shadowRoot.querySelector('button');

However most of this could be easily overcome by syntactic sugar or special methods in testing frameworks. |
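A minimal sketch of that kind of test-framework sugar (the '=>' syntax and the helper name are illustrative only, not a real selector or standard API):

```js
// Resolve a path like 'custom-element => button' by querying each segment,
// then stepping into the matched element's open shadow root for the next one.
function queryThroughShadows(path, root = document) {
  let el = null;
  let scope = root;
  for (const selector of path.split('=>')) {
    if (!scope) return null;
    el = scope.querySelector(selector.trim());
    scope = el && (el.shadowRoot || el);
  }
  return el;
}

// queryThroughShadows('custom-element => button') is roughly equivalent to:
// document.querySelector('custom-element').shadowRoot.querySelector('button')
```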
What about the polymer way? this.shadowRoot.querySelector()
|
I don't follow. For testing interactive elements, we do white box testing by exposing its shadow tree. There's no need for a component that uses another interactive component to reach into its shadow tree.
That should already work. |
I disagree. For full integration tests, I need to click and interact with elements just like the user would. I frequently have form elements, anchors and other interactive elements inside a shadow tree. The alternative is using selenium to click on specific offsets in the viewport - and that is extremely fragile.
It does - but it's very verbose and difficult to work with. Authors want the convenience that the shadow piercing combinator provided. This problem really only exists because selenium locates elements based on query selectors. It's cumbersome to use shadow dom in that environment. |
But you wouldn't do this with builtin elements. You wouldn't reach into a video element's controls and start messing with their buttons, because you trust that the video element is implemented & tested correctly in isolation.
Since what you want in your use case can be easily built as a shim on top of the existing capability, I still don't see why this needs to be implemented in the browser. |
True - but again that doesn't scale up to the way web components are used today. This is due to two base reasons:

    <my-app>
      #shadowRoot
        <form>
          <name-fields>
            #shadowRoot
              <input type="text" name="fullname">
          </name-fields>
          <form-submission></form-submission>
        </form>
    </my-app>

Take the above example. How do you test that form submission works? This is of course a contrived example, but the answer shouldn't be "don't do that". Having a distinct grouping of input fields is handy to share between several forms. As you continue to compose custom elements together, how else do you properly test the combined total? How do you test <form-submission>? |
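For concreteness, here is roughly what an integration test has to do today without any shadow-piercing selector (element names are taken from the example above; that <form-submission> renders a submit button in its shadow root is my assumption):

```js
// Reach each level of the composed tree explicitly via .shadowRoot.
const app = document.querySelector('my-app');
const form = app.shadowRoot.querySelector('form');
const input = form.querySelector('name-fields')
  .shadowRoot.querySelector('input[name="fullname"]');

// Simulate what a user would do, then assert on the outcome.
input.value = 'Ada Lovelace';
input.dispatchEvent(new Event('input', { bubbles: true }));
form.querySelector('form-submission')
  .shadowRoot.querySelector('button')   // assumption: a submit button lives here
  .click();
```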
The form submission issue is tracked by #187, which we're intending to discuss at W3C TPAC this year.
Even in that scenario, piercing across shadow boundaries isn't really a good way to write tests. When the implementation details of each component changes, you'd be forced to update all those integration tests. A better way to accomplish this would be relying on public APIs of each component to test, and make sure each component has a separate unit tests for public API updating its internal states correctly. If there isn't adequate API support for writing whatever tests, then you can add test/debug only API which exposes necessary states/information. We do this in WebKit's C++ code by exposing extra JS objects which lets tests expose & manipulate internal states of WebKit even though doing so in production/user environment won't be okay due to security concerns. |
This is exactly why a special selector would be nice for selenium only. I don't think adding it in general to the static profile is warranted at all and would just encourage bad things. |
Okay. That more or less matches our expectation. Does something like the API I proposed in #78 (comment) work for you? Instead of having an explicit '>>>' to cross a shadow boundary, collectMatchingElementsInFlatTree (implemented in https://trac.webkit.org/changeset/208878) finds all elements that match a given selector in each tree. |
For YouTube, the benefit of shadow DOM API is encapsulation of styling as a default. There are still benefits for global styling and the ability to select elements through shadow boundaries. That said, focusing on the dynamic profile, I do agree that this is mostly useful for testing and debugging. This is the majority (and possibly only) use case that YouTube has.

Is collectMatchingElementsInFlatTree on the element prototype? E.g. allowing for this to work on a subtree? I want to be able to do something like someDiv.collectMatchingElementsInFlatTree(). Also, does this API select for trees within a shadow root? Or does it only match an element in isolation? Like, are queries like #main x-foo valid if both x-foo and #main were in the same shadowRoot? |
Our API currently hangs off of the global object, and you have to pass in the element to use as the root of the search. |
Oh, wow, that's a much more significant restriction in power than I thought you were talking about. |
We have a flattened tree querySelectorAll implementation in axe-core that we could abstract out into a separate module if you would find it useful: https://github.com/dequelabs/axe-core. This implementation does in fact select across shadow root boundaries, but has other implications in that it will not select elements in the light DOM that are assigned into slots using just a light DOM selector; you have to use a selector that resolves into the slot. --Dylan |
Again, we're not interested in implementing '>>>'. |
I agree and have said as much upstream in w3c/csswg-drafts#640 (comment). I suggest that we continue the discussion there as this affects a document maintained by the CSS WG. I'll add that document to https://github.com/w3c/webcomponents/blob/gh-pages/README.md so it's easier to find from this repository going forward. |
As of Chrome 89.0.4357, using /deep/ in querySelectorAll() now throws an exception. I can't readily tell from this thread whether a replacement was ever created. (My scenario was a browser extension called moarTLS: I want to scan every <a> in the page, whether in a shadow or not, to determine whether its href points to HTTPS or HTTP.) |
There is no cross-browser / standard replacement. |
Just to confirm, Shadow DOM v0 and Custom Elements v0 have both been removed from Chromium in v89+. And that includes the (long deprecated) /deep/ combinator. |
Well, there is no direct replacement for /deep/. Sebastian |
There are JS-based userland solutions for this: https://github.com/Georgegriff/query-selector-shadow-dom. Similar approaches are also implemented in testing tools like Puppeteer and WebdriverIO. |
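For orientation, usage of that library looks roughly like the following (based on my reading of its README; treat the exact export names as an assumption and check the library's docs):

```js
// Assumed exports of query-selector-shadow-dom; verify against its documentation.
import { querySelectorDeep, querySelectorAllDeep } from 'query-selector-shadow-dom';

// Matches inside open shadow roots as well as the light DOM.
const firstButton = querySelectorDeep('custom-element button');
const insecureLinks = querySelectorAllDeep('a[href^="http:"]');
```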
Title: [Shadow]: Figure out a good replacement for /deep/ in testing scenarios (bugzilla: 28591)
Migrated from: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591
comment: 0
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c0
Dimitri Glazkov wrote on 2015-05-01 17:22:53 +0000.
One thing that immediately popped up once we started talking about removing shadow-piercing combinators is that WebDriver-based tests are frequently interested in reaching into shadow trees to grab a specific element to test:
https://code.google.com/p/chromium/codesearch#chromium/src/chrome/test/chromedriver/test/run_py_tests.py&sq=package:chromium&q=testShadowDom*&l=877
Web Driver actually has something currently that attempts to solve the problem: http://www.w3.org/TR/webdriver/#switching-to-hosted-shadow-doms
However, the feedback I got from ChromeDriver folks is that it's way too verbose and awkward to use for the most common case (see above).
Maybe the solution is just specifying deepQuerySelector for WebDriver spec. However, I want to make sure this is not just a one-off -- seems like this could be needed in other contexts.
comment: 1
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c1
Anne wrote on 2015-05-02 07:34:03 +0000.
We shouldn't do features for testing. That's bad.
I remain convinced that in the open case we should provide a myriad of features that cross the "deep" to aid with selection, event delegation, composition, etc.
comment: 2
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c2
Elliott Sprehn wrote on 2015-05-03 00:41:03 +0000.
(In reply to Anne from comment #1)
+1, we should keep /deep/ in the static profile for querySelector. Before we had it authors kept rolling their own (we saw this on multiple occasions).
comment: 3
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c3
Anne wrote on 2015-05-04 06:12:37 +0000.
Note that an alternative is that we introduce .deepQuery() or some such.
comment: 4
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c4
Elliott Sprehn wrote on 2015-05-04 06:21:02 +0000.
(In reply to Anne from comment #3)
deepQuery is not enough, you don't want to match a descendant selector across a ShadowRoot boundary since ".a .b" means something really different. You'd still need a special combinator to signal where the scope crossing should be in the selector expression.
ex.
.panel .image
All images inside panels contained in a single scope.
.panel /deep/ .image
All images anywhere below a panel, even if they're inside a nested widget.
This is important because it maintains the "don't accidentally cross a boundary" principle.
We need something like ::shadow as well.
comment: 5
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c5
Tab Atkins Jr. wrote on 2015-05-05 00:12:16 +0000.
(In reply to Elliott Sprehn from comment #4)
Yeah, trying to move the shadow-crossing quality to the core of the method doesn't work. It's much less flexible, as you note, and doesn't compose with anything else similar. The correct approach is to just embrace the "static profile" of selectors http://dev.w3.org/csswg/selectors/#static-profile and leave /deep/ there. (Or >>>, as it's now called.)
comment: 6
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c6
Hayato Ito wrote on 2015-05-07 08:43:56 +0000.
(In reply to Tab Atkins Jr. from comment #5)
Are there any existing clients who use the static profile?
Does it mean '/deep/' can be used in particular APIs?
comment: 7
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c7
Tab Atkins Jr. wrote on 2015-05-07 15:55:04 +0000.
(In reply to Hayato Ito from comment #6)
It's for querySelector()/etc.
comment: 8
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c8
Hayato Ito wrote on 2015-05-08 02:25:11 +0000.
(In reply to Tab Atkins Jr. from comment #7)
Thanks.
Can everyone agree that '/deep/' is okay to be used in querySelector()?
I think we are assuming that adding something to the static profile has zero overhead on the performance of the dynamic profile.
comment: 9
comment_url: https://www.w3.org/Bugs/Public/show_bug.cgi?id=28591#c9
Tab Atkins Jr. wrote on 2015-05-08 17:29:16 +0000.
(In reply to Hayato Ito from comment #8)
Correct. At worst, it's a check during grammar verification, to note that this isn't valid in the current context and so the selector should be considered grammar-violating.