[css-nesting-1] & representing parent elements vs parent selector #8310
Yes, I've had use-cases for both selectors – but also the desire to 'scope the outer selector entirely'. I am curious how the css-nesting polyfills in other processors have handled this. Were those polyfills designed with the current spec behavior, and has that resulted in surprising selectors for people? (While I'm curious about that, I don't think it necessarily tells us if the pattern is safe. In a lot of cases, I would expect the DOM will only have …)
I am a bit surprised to see this come up here :) I've reached out on the Sass repo to bring up exactly this difference: sass/sass#3030 (comment). I've also brought this up each time someone advertised Option 3 as a syntax option that is like Sass and would be simple to migrate. It is not simple, exactly because of this difference.
In our polyfill we have heuristics to mimic what native nesting will do. We've learned that people do not (yet) have a good mental model for `&`. Especially people coming from Sass have greater difficulty switching to "native nesting". Those without a prior mental model seem to intuitively figure out what works and how to use nesting.
How is `& + &` handled in Sass, then? Seems to me this happens when the parent and nested selector are complex, and the latter uses `&` more than once.
You're right, it does change. Specifically,

```css
.a .b, .c .d {
  & + & { color: blue; }
}
```

becomes

```css
.a .b + .a .b, .a .b + .c .d, .c .d + .a .b, .c .d + .c .d {
  color: blue;
}
```

in Sass.
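For contrast, under the current spec `&` behaves like `:is()` wrapped around the whole parent selector list, so the same rule corresponds (a sketch of mine, following the desugaring described elsewhere in this thread) to a single selector:

```css
/* .a .b, .c .d { & + & { color: blue; } } behaves like: */
:is(.a .b, .c .d) + :is(.a .b, .c .d) {
  color: blue;
}
```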
In Sass it is also perfectly fine to use pseudo-elements in the parent:

```css
::before, ::after {
  color: cyan;
  @media (min-width: 300px) {
    color: lime;
  }
  &:hover {
    color: pink;
  }
}
```

Not having this feature seems problematic for many.
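The reason this can't work under the current spec is that `&` acts like `:is()` of the parent list, and pseudo-elements are not valid inside `:is()`. A sketch of my own showing the problematic desugaring:

```css
/* The nested &:hover rule would have to mean something like: */
:is(::before, ::after):hover {
  color: pink;
}
/* ...but ::before/::after cannot appear inside :is(),
   so the parent list has no valid representation there. */
```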
My thoughts are: …
As Oriol noted, a consequence of resolving … I take this example on purpose because if … On the other hand (also noted by Oriol), I am not sure whether …
Given that Sass has been shipping with this exact behavior for years, and they pay a larger cost than browsers would (Sass has to ship the expanded results over the wire, rather than just representing it in memory), I don't believe this fear is warranted. (It does make the need for a minimum/maximum spec more relevant, as in #2881.)
This is true. But note that "just wrap it in `:is()`" …
Sass's particular parsing/emitting rules disallow … So no, …
Right, that's the question – is the expectation formed by Sass authors the natural expectation, such that we can expect fresh CSS authors to do the same, and be surprised when they write HTML that violates that expectation but still matches? (Like markup options 1 and 3 in my original post.) I think that it's a somewhat reasonable expectation. And again, since it's quite easy to regain the current spec's behavior (by using `:is()`) …
This definitely would not be. The …

Yes, this would make pseudo-elements in the parent usable again.
Summarizing some pros/cons of each:

The Current Spec: …

The Sass-ish Proposal: …
Correct me if I'm remembering wrong, but I recall reading comments where you said that Sass avoids the combinatorial explosion by using some heuristics and not expanding to the full list. That doesn't seem reasonable for CSS.
I would like to know what an implementer thinks. @sesse
It is also mentioned in the note under …
Happens to be the same note I was looking for regarding specificity. Would specificity remain the same as if `:is()` were used? E.g.:

```css
.a .b {
  .c & {}
}

.a .b {
  .c :not(&) {}
}
```
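If the `:is()` reading applies, the specificity would come from the most specific selector in the parent list. A worked example of mine for the first rule above, assuming the `:is()` desugaring discussed in this thread:

```css
.a .b {
  .c & {} /* behaves like `.c :is(.a .b)` */
}
/* Specificity of `.c :is(.a .b)`:
   .c         -> (0,1,0)
   :is(.a .b) -> (0,2,0), the specificity of its most specific argument
   total      -> (0,3,0) */
```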
It's a bit tricky to comment on this, because I don't think the boundaries of the proposal really have been fully laid out yet. So let me first assume that we don't mean to support token pasting (e.g. `.foo { &bar {} }` becoming `.foobar`). Second, do we intend to do this expansion before or after CSSOM?

Let me give a very short introduction to how CSS selectors work in Chromium, to the point of oversimplification. We store two different representations of a stylesheet in memory, where one is roughly an array (for parsing and CSSOM purposes) and the other is roughly a hash table (for matching purposes). The former looks very much like what you'd write in a stylesheet, with the notable exception that selectors with selector lists are parsed into a tree structure and the combinator set is a bit more explicit (each and every simple selector has a combinator towards the next element). E.g., if you write …

The second, which we actually use for matching, uses the same complex selector representation, but is otherwise completely different. In particular, this representation rests pretty firmly on the concept of the (complex) selector as a unit; if an author writes …

Supporting the current spec is super-simple for us. Whenever we see `&` …

Now let's look at the case where we are to treat `&` as a substitution …

However, the bigger problem here is selector lists in the parent. I can't find any usable ways of dealing with that at all. Not only would we need to know ahead of time that we'd need to start matching of the inner rule twice, and then know which of the two parents to jump to; we wouldn't know where to store the rule in the first place. (We absolutely need two selectors …)

So we need to expand; there's no way around it. I see that people think this is a lower cost for us than Sass, but it absolutely isn't. The wire cost is real, but it can be compressed, and our memory representation is going to be much larger than what you'd see on the wire even without compression—and Sass only needs to run once, whereas we'd pay the CPU costs on load (when we recreate the second form, either due to load or a stylesheet change) and the RAM cost for as long as the page is loaded.

I also see people claim that both the current spec and an expansion-like form are “multiplicative”, but that is entirely wrong; the current spec is additive (n rules always cost O(n) memory, O(n) setup cost and usually O(1) match cost—O(n) at most) and expansion is indeed multiplicative, leading to exponential behavior when chained (n rules can cost O(2^n) memory, O(2^n) setup cost—match cost is still usually O(1), but now O(2^n) at most).

Authors already know next to nothing about style performance. If you give them this kind of behavior, it's just a huge footgun, and there's no way they will understand that their innocent CSS blows up completely in the browser. We could try to put limits on this, but it's not entirely clear how that would work. Where would these limits be enforced? What would happen if they are violated; would rules stop matching silently? What happens when some of the rules get modified in CSSOM; does the modification fail, or does something else happen?

If we didn't have complex selector lists in the parent, we wouldn't have these problems. We could expand; it would cost us some memory and be a bit slower in the setup phase, but it would be restricted to rules that actually use nesting.
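To make the blowup concrete, here is an illustration of my own (not from the comment above) of how chained selector lists grow under full expansion:

```css
/* Three nesting levels, each a two-selector list: */
.a1, .a2 {
  .b1 &, .b2 & {
    .c1 &, .c2 & { color: red; }
  }
}
/* Fully expanding the innermost rule yields 2 × 2 × 2 = 8
   complex selectors:
   .c1 .b1 .a1, .c1 .b1 .a2, .c1 .b2 .a1, .c1 .b2 .a2,
   .c2 .b1 .a1, .c2 .b1 .a2, .c2 .b2 .a1, .c2 .b2 .a2
   In general, k levels of n-selector lists expand to n^k
   selectors, whereas keeping & as a reference to the parent
   rule (the current spec) stays linear in the source size. */
```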
Even without parent lists, I don't really think trying to match … The third alternative is the infamous Sass heuristics, but if so, someone would have to specify what those are if we are to discuss them. And again, remember that they'd have to be efficiently implementable very quickly (whenever a stylesheet loads or changes), so it's not a given that we can just copy Sass's implementation.

As a separate note, it is a problem to try to radically change the spec basically at the moment we are trying to ship it. This is a great way to discourage people from being the first to ship a feature.
FWIW, I posted a couple of polls on the topic: Twitter thread, Mastodon thread.
WebKit has also largely implemented nesting in the past few weeks, if I read the commits correctly. Would be good to hear from them too.
I've been using Sass significantly for only a year or two, and this did and does match my intuition. I think it is a LOT easier to understand than the implicit `:is()`. @sesse makes some really good points though, and making a change like this now would seem to be something that will significantly delay the release and/or make it much less performant. I could probably get used to the non-Sass way, but converting old SCSS files is going to be harder than I was imagining.
Adding @anttijk |
Results of my mini-poll: there were only ~350 votes per poll on Twitter, and ~150 on Mastodon, mostly limited to folks who follow me.

Folks were pretty divided when it came to:

```css
.a .b {
  .c + & {
    background: green;
  }
}
```

Twitter: …
Mastodon: …

Note: I didn't present these selectors as options. Instead I described the behaviour.

There was a stronger preference when it came to:

```css
.a .b {
  .c & {
    background: green;
  }
}
```

Twitter: …
Mastodon: …
I would have been interested to see the results for …
To me, inserting an implicit `:is()` … WebKit implements this feature by desugaring the selectors (rather than having a "parent selector" visible in selector matching), so it can easily adapt to changes.
What do you do if there's a selector list in the parent? Do you get exponential RAM usage if many of these are nested?
The current implementation desugars with `:is()`. On the other hand, I'm not sure how important this really is in practice. To get the same behavior without nesting you'd have to write out each combination by hand.
Won't it still be? If you desugar, you'd end up with exponential `:is()` lists and thus RAM usage, right?
Yeah, you can definitely break it. There are many ways to consume all memory in the web platform if you want. In practice there would be limits.
A quick informal ask-around with a handful of devs I know showed they didn't realize it behaves the way it does. They all thought it didn't really make a difference and would act the same as Sass. Simultaneously I received this feedback from Ana Tudor, a well-respected voice within the developer community who's known for pushing the capabilities of CSS to its limits: …
If it remains with an implicit `:is()` … Changing it will allow both options to work, be more expressive, and easier to read naturally (for the simple fact …).

In both cases, we can live without nesting indefinitely, by using more verbose selectors and preprocessors, so there should be no impetus to push this forward in a state that doesn't fulfil what developers need from it. Clearly it's going to scupper current progress on implementation work, and require some new thinking on how to implement without excessive resource consumption, but this is something that will have to be achieved (as per …).
You are missing an important detail: Changing it will most likely make sure it isn't rolled out at all. Not delay it; stop it. It's unlikely we'll want to implement something that blows up into exponential memory use so easily and without warning, and if nothing else is acceptable to web developers, seemingly the only solution is to do nothing.
Yes, it sounds increasingly like we should just declare css-nesting-1 a painful learning experience, delete it permanently, and go do something else.
There's a restriction in Sass, not in the current nesting spec, that the `&` has to be the leftmost part of a compound. That could make it a bit easier to implement with a synthesized special `:is()`-like that keeps track of the element matching the leftmost compound inside the `:is()`-like and continues adjacent/descendant matching to the left using that element as the candidate, but that wouldn't easily make specificity match. We would have to compute the specificity dynamically by matching all sub-selectors to find the one with the highest specificity (assuming the specificity behavior also has to match current Sass).
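To illustrate why specificity is the hard part (an example of my own, not from the comment above): with a parent list of mixed specificities, Sass-style expansion gives each branch its own specificity, while an `:is()`-like implementation has a single, fixed maximum:

```css
.a, #b {
  & .x { color: red; }
}
/* Sass-style expansion keeps two distinct specificities:
   .a .x -> (0,2,0)
   #b .x -> (1,1,0)
   An :is()-based form, :is(.a, #b) .x, is always (1,1,0),
   even when only the .a branch actually matched. */
```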
I don't think the current spec is inconsistent with … [*]: #7211 disallowed … Basically, this issue only has implications for relative selectors where the reference to the anchor elements is not on the left-most compound selector, which can't happen with …
Most nested selectors have …

The natural thing is that …
My general sense here is that this is proposing putting something that is very preprocessor-ish and not very CSS-ish directly into CSS. But it's been a longstanding preprocessor feature, so people are accustomed to it being preprocessor-ish. On the flip side, doing it the preprocessor-ish way means that (using …).

However, I agree with @sesse that the lack of an efficient in-memory representation for the selectors that would result from the change proposed here would be a major problem. (We don't know for sure that it's not possible to do -- but we don't yet have a demonstration that it is possible to do -- and to do in a way that isn't unacceptably complex.) I would be extremely uncomfortable resolving to do this without an explanation of how engines are expected to implement it in a way that is efficient in both time and memory, and agreement from implementors across multiple engines that such an approach would work.

I think we should avoid adding more performance cliffs into CSS where using simple-looking features can cause extremely poor performance (in speed or memory use).
So to summarize: at this time it does not seem realistic to implement nesting in a way that is so close to nesting in preprocessors that it is indistinguishable for most users/projects.

Preprocessors can change their implementation to match nesting as currently specified and deal with all the fallout from such a breaking change. This is a short-term problem. But if authors agree that the resulting feature is less useful, it means that implementing nesting in browsers has taken something away from authors instead of adding a new feature.

Preprocessors could choose to keep their implementation and always desugar to unnested CSS. But this splits the ecosystem (dev tooling, learning resources, ...).

Leaving it as a preprocessors-only feature might be better for authors?
I think there is also a real-worldish performance concern in current engines with the `:is()` approach …
Nothing really stops you from doing an OR query against the Bloom filter (though we don't do that currently, and for very complex cases, it may not be feasible).
Right, but that is equivalent to desugaring all the variants of the selector.
No, it's not? It's a fuzzy query. For something like

```css
:is(.a, .b, .c) :is(.d, .e, .f) div { … }
```

this will cause at most six lookups in the Bloom filter, but desugaring will create at most eighteen (nine rules, each doing two checks).
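For comparison, a sketch of mine of the desugared form, which is where the nine rules and eighteen checks come from:

```css
/* Desugaring :is(.a, .b, .c) :is(.d, .e, .f) div yields
   3 × 3 = 9 complex selectors: */
.a .d div, .a .e div, .a .f div,
.b .d div, .b .e div, .b .f div,
.c .d div, .c .e div, .c .f div {
  /* each rule checks 2 ancestor classes against the
     Bloom filter, for up to 18 lookups in total */
}
```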
You are right. I'm not sure how easy it is to expand these optimizations at all without affecting performance with the existing content. (Note that if you don't generate all the same permutations, the filtering performance may be substantially worse compared to the desugared case. Say, if most of the tree is within …)
I agree, and it's not something we've spent much thought on, beyond the obvious optimization of `:is()` with a single selector inside (which we treat generally the same as that selector, both for bucketing and for Bloom filters). But at least we believe this is possible to do if it should become a performance problem in the future (i.e., users commonly write nesting selectors with lists in the parent selector), whereas it's not obvious at all how to implement an unbounded desugaring model without risk of blowup (in the same situation).
Not missing it, I'm just not convinced the implementation is such a hurdle that it won't eventually be overcome under the weight of developer needs, so I don't think it's a valid argument for the simpler option to be locked in and rolled out. If implementation stops, but one day starts again, is that not a delay? (I have no doubt it is a difficult challenge that requires lots of rethinking, and potentially large architectural changes that are simply not workable for implementers currently.)
No, I was just comparing this situation to a developer desire for a feature that was long thought too expensive to implement, where a way to achieve it was eventually found.
That's fair for the most part, but I wouldn't like to see future emerging patterns and features hindered by that metric.
Where does this interpretation of "natural" reading come from? Is there something already in CSS I'm missing that would suggest …? It would seem the current behaviour is a big footgun, requiring a large list of "gotchas" in learning resources (especially with the selectors being forgiving, unlike most of CSS).
Avoiding quoting your entire comment: I'm not against this, but it has lots of short- and long-term issues that you touched upon. If this spec is to go ahead as it is, it serves a mostly different purpose to what preprocessors are doing (which I understand was the original intent, but this was certainly co-opted for its similarity to preprocessor nesting as a desirable feature). That means the syntax needs to clearly differentiate itself from preprocessor nesting, so that they can easily coexist and not cause confusion (which many clearly didn't know would need to be the outcome when feeding back on syntax). I imagine that means either picking one of the less favourable syntax options, or preprocessors changing their syntax moving forward, but I still think someday we will end up in a situation where both exist in CSS itself, so we may as well plan carefully for that. To avoid going too far off topic here, see my proposed solution in #8329.
Is it possible to define a grouping operator similar to …?
I agree that authors familiar with preprocessor nesting will (at first) be surprised by this in the situations where it comes up – which is not the majority of nesting, but also not rare. But I'm not convinced by the arguments that one approach is more intuitive than the other outside that existing reference, or that authors would have to imagine an implicit `:is()` … Rather, the current approach:
In Chromium browsers, with Experimental Web Platform features turned on, all but the final selector here match the same elements (see codepen demo):

```css
/* Similar to `.a :is(.b .c)` currently */
.b .c {
  .a & {
    background: cyan;
  }
}

/* A scoped selector in the same shape */
@scope (.b .c) {
  .a :scope {
    border-style: dotted;
  }
}

/* Reversing the shape, with .a as the scope */
@scope (.a) {
  /* only the right-most matched element must be in-scope */
  .b .c {
    border-color: maroon;
  }
  /* can be narrowed more explicitly */
  :scope .b .c {
    font-weight: bold;
  }
}
```

This would change how people use nesting, and it will take some time for authors to get used to it – with …
Yeah, I'm going to withdraw this suggestion, for a few reasons: …
The only thing that gives me pause is that this suggestion's behavior can't be reproduced manually (without just unnesting entirely), while if we switched to this suggestion then the current spec's behavior can be easily opted into. I think it would be valuable to pursue having a way to opt into this suggestion's behavior, and will open a fresh issue for that.
For completeness, to complement Miriam's post above, note that the last (`:scope .b .c`) selector doesn't make the text bold in this markup:

```html
<div class="a b">
  <div class="c">nesting & scope</div>
</div>
```

To make it bold, a slight adjustment to that last selector needs to be done by removing the descendant combinator from between `:scope` and `.b`:

```css
@scope (.a) {
  :scope.b .c {
    font-weight: bold;
  }
}
```

EDIT: Oh, looks like this just crossed Tab's closing of this issue
When I wrote the Nesting spec originally, I specifically defined it so that `&` was an ordinary selector that happened to match the elements matched by the parent rule. This leveraged the powers that only a live browser could have, which seemed useful. Preprocessors, unable to do the same, instead have to make their "nesting selector" more of a macro, that is substituted with the parent selector.

That is, in a rule like `.a .b { .c & {...} }`:
In the Nesting spec, the `&` matches "a .b element with a .a ancestor", and the nested rule imposes an additional "...and has a .c ancestor", so it's the equivalent of `.c :is(.a .b)`. It'll match all the following markup structures: …

Preprocessors can't do this (or rather, couldn't do this before `:is()` was standardized, and still can't widely do it since support for `:is()` is still in the low 90%s). Expanding such a selector fully (to `.a .c .b, .c .a .b, .a.c .b`) would mean a combinatorial explosion in selector length, and gets drastically worse the more combinators are in either selector. (Sass has a similar situation with its `@extend` rule, where it uses heuristics to select only a few likely expansions, rather than all possible ones.) Instead, they substitute the selector directly, giving `.c .a .b`, and only matching the second of those three markup structures.

I was discussing Nesting with @nex3 a few days ago, and she expressed surprise and some dismay that Nesting had this behavior. She believes that a large amount of CSS written with Sass nesting expects and depends on the current Sass behavior, not accidentally but quite intentionally – they expect this sort of nesting pattern to be expressing "and then a .c container around the previous stuff" rather than just "and then a .c container somewhere around the specific previous elements". She believes this would actually be a significant burden to people migrating from preprocessor nesting to native nesting, and even for people who are using native Nesting fresh, believes it'll be a common point of confusion for people using this pattern.
As well, under Sass's behavior, an author can recover the current spec's behavior by wrapping the `&` in an `:is()`, like `.c :is(&)` (or wrapping the parent selector). However, there is no way to recover Sass's behavior from the current spec's behavior.

In the majority of cases it doesn't make a difference which way we go. The most common relative selector case (a nested selector like `.c` or `+ .c`) is unchanged, and many non-relative selectors are also unchanged, such as `& + &` or `:not(&)`. Only cases like the above, where there is a descendant combinator in the parent selector and a descendant combinator in the nested selector with an `&` following it, will change their behavior.

I know this is a pretty significant semantic change to be coming at this late time, but Natalie felt it was pretty important, and @mirisuzanne backed her up, and the arguments were reasonably convincing to me.
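A sketch of that opt-in under the proposed semantics, using the selectors from the example above:

```css
.a .b {
  /* proposed (Sass-ish) behavior: substitutes to `.c .a .b`,
     so .c must be an ancestor of the whole chain */
  .c & { color: red; }

  /* wrapping & in :is() recovers the current spec's behavior:
     `.c :is(&)` substitutes to `.c :is(.a .b)`, letting .c sit
     anywhere above the matched .b element */
  .c :is(&) { color: blue; }
}
```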
(Note: we'd still not be doing selector concatenation – `.foo { &bar {...} }` is still equivalent to `bar.foo`, not `.foobar`.)