This repository has been archived by the owner on Jan 24, 2024. It is now read-only.

Warn correctly or don't warn at all about "experimental" CSS features #2402

Closed
Elchi3 opened this issue Nov 13, 2019 · 23 comments
@Elchi3
Member

Elchi3 commented Nov 13, 2019

User story

As an MDN reader or devtools user, I want to be warned correctly about CSS features that are marked "experimental", so I can take necessary steps or avoid using given features. If the warnings are wrong, then I don't want to be warned about "experimental" features at all.

Acceptance criteria

Only CSS features that are worth calling out as "experimental" are marked as such in BCD for CSS features and on MDN CSS pages. This might mean that we're deciding to get rid of the concept of "experimental" altogether.

Tasks

    • Decide if any features are worth being called out as "experimental".
    • Update status.experimental in BCD depending on the outcome of task 1.
    • Update / remove any banners (experimental, SeeCompatTable) on MDN CSS pages according to the outcome of task 1.

This needs coordination with devtools folks who have implemented calling out "experimental" CSS features in their latest draft of the web compat tool. See the screenshot here: https://groups.google.com/forum/#!topic/mozilla.dev.developer-tools/U350YHcJZac

The BCD issue about removing the concept of "experimental" altogether (not just for CSS features) is mdn/browser-compat-data#1528
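For reference, the status metadata in question lives in BCD's `__compat` blocks and looks roughly like this (a simplified sketch with a placeholder property name, not an actual BCD entry):

```json
{
  "css": {
    "properties": {
      "some-property": {
        "__compat": {
          "status": {
            "experimental": true,
            "standard_track": true,
            "deprecated": false
          }
        }
      }
    }
  }
}
```

It is the `experimental` boolean here that drives the banners and devtools warnings discussed below.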

@chrisdavidmills
Contributor

@rachelandrew this is another task that we'd love your help on in the upcoming sprint, depending on how much of your time is needed for it.

@a2sheppy
Contributor

I have a proposal for improving how we manage the "experimental" flag. Currently, everything that's experimental is supposed to be marked that way.

My suggestion is this:

  • Add an experimental flag to the SpecData.json file (or, better, find a way to add this flag to the BCD repository). If an entire specification is experimental, set that flag. Do not set experimental on any of the API's members in the BCD JSON.
  • If a non-experimental API has an interface or other "top level" member which is experimental, set the experimental flag on it. Do not set the experimental flag on any of its members in the BCD JSON.
  • If an API and a member interface or dictionary or whatever are not experimental, but a member of the interface or dictionary is experimental, set the experimental flag on that member.
  • Update the macros that generate badges or banners based on the experimental flag to check ancestors in addition to the object itself, until either the experimental flag is true or the top of the hierarchy is reached.
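The ancestor check in that last step could be sketched roughly like this (a hypothetical helper, assuming a BCD-shaped JSON tree; the real KumaScript macros are more involved):

```javascript
// Walk a BCD-style tree and report whether a feature, or any of its
// ancestors, is flagged experimental. `path` is a dot-separated query
// such as "api.Toast.eat" (names here are purely illustrative).
function isExperimental(bcd, path) {
  const parts = path.split(".");
  // Check the feature itself first, then each ancestor up to the root.
  for (let end = parts.length; end > 0; end--) {
    let node = bcd;
    for (const key of parts.slice(0, end)) {
      node = node && node[key];
    }
    if (node && node.__compat && node.__compat.status &&
        node.__compat.status.experimental) {
      return true;
    }
  }
  return false;
}
```

With this, a badge macro would ask `isExperimental(bcd, "api.Toast.eat")` and get `true` whether the flag sits on the method, the interface, or the whole API.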

Basically, what this boils down to: set the experimental flag on the highest level item that covers the experimental nature of things, and do not set it on any of the lower level items in the hierarchy.

For instance, if the Web Nuclear Fusion API spec is experimental, set experimental on it in SpecData.json (or in BCD if we add this capability there), and you're done. None of the interfaces or other items contained anywhere within the API should be marked experimental.

Or if the Web Toaster API is non-experimental, but its non-experimental interface Toast has an experimental function, eat(), set the experimental flag to false on the API and the interface, but to true on the eat() method in BCD.

This way, we minimize the amount of maintenance we have to do when managing experimental technologies, since we don't have to change the BCD for every single component of an API if the entire thing is experimental, and when the API "graduates", you only have to change one value.
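In BCD terms, the Web Toaster example might look something like this (hypothetical feature names, simplified `__compat` blocks):

```json
{
  "api": {
    "Toast": {
      "__compat": {
        "status": { "experimental": false, "standard_track": true, "deprecated": false }
      },
      "eat": {
        "__compat": {
          "status": { "experimental": true, "standard_track": true, "deprecated": false }
        }
      }
    }
  }
}
```

When eat() graduates, the only change needed is flipping that one boolean to false.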

@rachelandrew rachelandrew self-assigned this Nov 14, 2019
@rachelandrew
Collaborator

I'll go through and update any that are definitely not experimental (I think a whole batch are logical properties, which are now in 2 browsers); whatever is left, I'll bring back so we can work out what to do about them.

@rachelandrew
Collaborator

This is going to take a bit longer than my optimistic one day, but I've made a pretty good dent in it today. I've put them all into a spreadsheet and am working through the obvious not-experimentals to fix the data/pages and I'll then post a list of ones we might need to make a call on, along with info which might push them one way or the other.

I think the true "shipped in a browser while still experimental" stuff is happening less. The things that are like that are typically quite old. These are features which were implemented in one browser, sometimes prefixed, and the spec is still being worked on, which is why we don't have other implementations.

I think in general that even if we only have one implementation, if the spec is at CR, and there are no issues flagged in the spec or the WG GitHub about that feature, we're probably safe to not have it marked as experimental. At that point we're talking about lack of implementation rather than a changing spec. Anyway, I shall continue with this.

Couple of PRs below to fix up BCD data.

@ExE-Boss

If we’re putting an experimental flag to SpecData, then we should probably do that in mdn‑data once mdn/data#397 is merged.

@Elchi3
Member Author

Elchi3 commented Nov 20, 2019

If we’re putting an experimental flag to SpecData, then we should probably do that in mdn‑data once mdn/data#397 is merged.

We are not doing that.

@a2sheppy
Contributor

If we’re putting an experimental flag to SpecData, then we should probably do that in mdn‑data once mdn/data#397 is merged.

We are not doing that.

OK, so what precisely do you propose we do? I don't think it's appropriate to outright get rid of the concept of experimental APIs, because they do exist, and they do get shipped enabled in browsers, even though they probably should not. Developers need to be aware of that when it does happen. So we can't just remove the concept of labeling things as experimental, which was one of the proposals made.

@rachelandrew
Collaborator

@a2sheppy @Elchi3 I think with these CSS ones there are a set of genuinely experimental things. Mostly these are due to stuff shipping in one browser while the spec is being worked on, sometimes behind a flag and sometimes not. Shipping without a flag doesn't happen so much now, but in the past a few things shipped that way and they still hang around.

If we are going to document things which are behind a flag, and I think it is useful to do so as it helps people offer feedback, then I think it is worthwhile explaining that the spec may well change. Ultimately that's the reason to put something behind a flag in development.

I feel as if it might be good to link to an explanation somewhere about how these things are developed, why we have experimental things in browsers (and why it is helpful to look at them, where to offer feedback) rather than just having an experimental flag with no explanation of the situation.

I kind of have a few groups of things as I work through:

  1. Things which are implemented with a stable spec (Not experimental and I am updating these)
  2. Things with at least two implementations but a spec still in ED/WD. I think we need to make a call on these case by case.
  3. Things with only one implementation but a spec in CR (good arguments to unflag some of these on a case by case basis)
  4. Things with flagged implementations, unstable specs, issues raised against them in specs (definitely experimental)

@rachelandrew
Collaborator

So I've made a first pass through these. In this spreadsheet the ones marked in green I think probably are genuinely experimental https://docs.google.com/spreadsheets/d/14Ah74a8rpxtvSHe7T3ivzry5VvwNrL9XZFMeKsKJGPY/edit?usp=sharing

I have already updated the BCD for those that definitely are not. I think in reality most of the others (that I haven't already done, and that are not marked in green) could probably have the flag removed. There are a few that don't have implementations, but there is nothing to indicate them being experimental as such. However given that a spec can't move past CR without there being two independent implementations it is of course possible things could change there. But it might be reasonable to have things marked experimental if they:

  • have no implementations at all
  • have issues raised against them in the spec

Anyway, I probably need some other opinions at this point but I've done a lot of BCD updating as part of this!

@Elchi3
Member Author

Elchi3 commented Dec 2, 2019

Thanks for this heroic effort, @rachelandrew 💯
This is definitely a big step towards much saner usage recommendations for CSS features. I'm sure we will get a lot fewer complaints about misused "experimental" banners/flags now. 🎉

Some thoughts on the overall situation:

The two things you list for reasonable "experimental" status generally sound good to me. I wonder, though:

If we have no implementation at all, should we just let that speak for itself? Why would we need to also call it "experimental" in addition to having an all-red compat table? I think the core problem is that we've coupled implementation status to "experimental" and we then forget to update "experimental" when implementations actually happen.

And for things that have issues raised against them in the spec, should we rather call these features "unstable" and really only mark features as "unstable" when it is the case? Is "experimental" too unspecific? Do you think we should always have an explanation or a link to the spec issues, so that if such issues are resolved we can also resolve the "unstable" / "experimental" state in our data?

@rachelandrew
Collaborator

I like the idea of "unstable" with links to the spec. Mostly these are going to be features with some history at this point, as we don't get half-baked stuff implemented so much these days. Even things which have changed after shipping (for example, the renaming of grid-gap to just gap) get aliased, so if things are in browsers they should probably be classed as usable in most cases.

I'd be happy (if @chrisdavidmills is) to continue this effort and fix up the remaining ones if we have a decision.

@chrisdavidmills
Contributor

I like this idea, and am happy for @rachelandrew to continue to help with this work, but I want her to continue on the learning area assessments work next. I'd like to see a plan for how much work there is to do here, what it would involve, etc., so we can figure out slotting it into our schedule.

@rachelandrew
Collaborator

@chrisdavidmills I'm on the learning area stuff now. In terms of updating the ones I didn't update because I wasn't sure if we wanted to keep them experimental, it's probably a couple of days; add an extra day if I also need to flag the spec issues, as I'll need to go find the exact issue and link to it, etc.

It would probably be fastest to do sooner than later just because I've had my head in it recently so I won't have to spin up again and remember where I was.

@chrisdavidmills
Contributor

@rachelandrew OK, that's a fair point. Would it be OK if you just finished the little bits of assessment stuff for the current user stories, then went back to this work, and then after that did some more assessments work?

@rachelandrew
Collaborator

@chrisdavidmills sure - I think put this in the next sprint and I'll tackle it once I've done the assessments stuff. Would be quite nice to feel it is tidied up having put so much time into it :)

@chrisdavidmills
Contributor

@rachelandrew sounds good; totally agree!

@rachelandrew
Collaborator

@Elchi3 if we are going to switch to unstable and link to the spec, how should I do this in the data?

@Elchi3
Member Author

Elchi3 commented Dec 4, 2019

I think it would require a change to BCD. Could we maybe first collect these things as a draft in a BCD issue and then discuss how we model it into the data structure? A (diverse) set of examples for "unstable" would help enormously to model this. Does that seem like a sensible approach?

@rachelandrew
Collaborator

Sounds good to me. What I'll do then is go through and fix the BCD for the ones that don't have spec issues, and make a list of those that do, with their links and so on, as a new issue against BCD.

This was referenced Dec 23, 2019
@rachelandrew
Collaborator

So, I think I am through these. I have created a set of data as suggested above to list the features that have something sketchy about them: mdn/browser-compat-data#5392

Otherwise pages are updated, BCD is updated, there is a PR for data to fix the sidebars to match the BCD and pages. I think this part of the task is done pending all those PRs being merged.

@Elchi3
Member Author

Elchi3 commented Jan 7, 2020

Rachel, happy new year and thank you so much for this work! This makes me soo happy! 💯

We can now call this done and continue the conversation in mdn/browser-compat-data#5392 where I'm eager to find a better concept for "experimental" or "unstable" features in BCD thanks to your fundamental research! 🥇

@Elchi3 Elchi3 closed this as completed Jan 7, 2020