# Public fields "Define" semantics and subclasses #176
Using define+set, how would it be possible to create a decorator to make a field non-writable? This would throw, or define a writable property after tc39/ecma262#1320:

```js
class Foo {
  @readonly x = 2;
}

function readonly(desc) {
  desc.descriptor.writable = false;
}
```
|
I don't see what's new about this scenario. I agree that it's possible to write a code snippet which will be locally surprising, but that's true about all semantic choices. I think TC39 was thinking about this sort of case when it made the Define call. |
Does anyone have an equally compelling code example where [[Set]] behavior is "locally surprising"? (I fully realize that "equally compelling" is somewhat subjective, so interpret that however you like.) |
If we must live with [[Define]] as the default (however good or bad that turns out to be), then I think a decorator would be the best workaround for this case. Example:

```js
class Sub extends Base {
  @instance
  set x(value) { console.log(value); }
}
```
|
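For comparison, a decorator-free version of that workaround is runnable today (it anticipates the `Object.defineProperty` approach mentioned later in this thread). A minimal sketch, assuming native [[Define]] field semantics:

```js
class Base { x = 1; }

class Sub extends Base {
  constructor() {
    super(); // Base's field has already been Defined on `this` at this point
    // Replace the own data property created by the field with an accessor:
    Object.defineProperty(this, "x", {
      configurable: true,
      get() { return undefined; },
      set(value) { console.log(value); },
    });
  }
}

new Sub().x = 5; // logs 5 — the instance-level setter now runs
```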
@nicolo-ribaudo possibly with an initializer extra:

```js
function readonly(desc) {
  desc.extras = [{
    kind: "initializer",
    placement: desc.placement,
    initializer: function() {
      // Set writability _after_ the field is set, but only if it is an "own" property.
      const prop = Object.getOwnPropertyDescriptor(this, desc.key);
      if (!prop || prop.get || prop.set) throw new Error(`'${desc.key}' is not a field`);
      prop.writable = false;
      Object.defineProperty(this, desc.key, prop);
    }
  }];
  return desc;
}
```
|
This comment convinced me that [[Set]] can't be worse than [[Define]]. I tried really hard to give an example, but I couldn't find an example as realistic as the arguments against [[Define]].

```js
// Library code
class MyComponent extends React.Component {
  state = { secret: "secret" };
}

// Attacker code
React.Component.prototype.__proto__ = new Proxy(React.Component.prototype.__proto__, new Proxy({}, {
  get(target, name) {
    return function (...args) {
      console.log(name, args);
      return Reflect[name](...args);
    };
  }
}));

// User code
<MyComponent />
```

With [[Set]] it logs `"set", MyComponent {}, "state", { secret: "secret" }`, which can have potential security implications, but with [[Define]] nothing is logged, since defining the field on the instance never consults the prototype chain.
|
I just took a day to look at this proposal yesterday. I have to say my vote goes to |
I think your proposal could solve half of the problem, but it still leaves the other half:

```js
// base v1.0
class Base {
  constructor() {
    this.x = 1
  }
}

class Sub extends Base {
  x = 100;
}
```

```js
// base v1.1, refactored to getter/setter
class Base {
  get x() {...}
  set x(v) {...}
}
```

The essential issue is …
|
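A minimal runnable demonstration of this refactoring hazard (assuming native [[Define]] field semantics; the `#x` backing field is added here only to make v1.1 concrete):

```js
// base v1.1, made concrete with a private backing field
class Base {
  #x = 0;
  get x() { return this.#x; }
  set x(v) {
    console.log("validating", v); // with [[Set]] fields this would log
    this.#x = v;
  }
}

class Sub extends Base {
  x = 100; // [[Define]]: creates an own data property, bypassing the setter
}

const s = new Sub();
// No "validating" log was printed, and the accessor pair is now shadowed:
console.log(Object.getOwnPropertyDescriptor(s, "x")); // { value: 100, writable: true, ... }
s.x = 5; // still bypasses the setter — writes the own data property
```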
Chrome is shipping public fields with Define semantics in its beta branch. As I have explained in other threads, we don't have ecosystem consensus towards one or the other of these alternatives, and TC39 has considered this question at length and settled strongly on the Define side. I'm not sure what benefit we will get by continuing to discuss the issue. |
@littledan Chrome shipped it because it is already stage 3. The problem is why it could advance to stage 3 when there are still several serious issues like this. It is said that "TC39 has consensus" on all issues, but obviously some never agreed, or may change their minds later. So how do we deal with such a situation?
Yes. Without fixing the process first, we are all wasting our time. |
I think the way to address these situations in the future is to discuss the issues at Stage 2 and come to a conclusion before reaching Stage 3. See the explainer for a summary of the Set vs Define debate. |
@littledan Thanks for adding the summary to the explainer/readme. For stage 2 proposals in general, where is the possible schedule for advancement to stage 3 announced? BTW, I doubt that most developers (including myself until recently) realize that stage 3 basically means all syntactic and semantic details are final and not open to reconsideration unless some very major and unforeseen problem crops up. Even with this description of what "Acceptance signifies" from https://tc39.github.io/process-document/:
...it still sounds like if there were a lot of negative community feedback on a particular issue, that the design decision could be reconsidered even if someone had pointed out the issue already and the committee was aware of it at the time they moved the proposal to stage 3 (especially if the volume of feedback and the negative impact were much greater than anticipated). But what I'm hearing from you and other committee members is that only new and unforeseen issues would have any sway. I actually don't think that there will be some huge negative reaction on this particular issue (for one thing, it's an edge case) and maybe it will even be more positive than negative; I'm just describing a hypothetical situation here to understand the process. |
P.S. At the very least, I think the process document should be updated to indicate that stage 3 means shipping in browsers and/or node.js (at least 2 implementations), not behind an experimental flag. |
@mbrowne This is a really good point. We've discussed many process clarifications like that, but were unable to get consensus on them, unfortunately. We're working on improving our supplementary documentation on how the stage process works in practice, but this documentation is still under internal review and not released yet. I'll be happy to get your feedback on the document when it's released. |
Closing as per #176 (comment). |
I don't agree with the decision to close this issue. The point of Stage 3 is to implement, experiment, and determine if there are bugs or issues with the semantics of the proposal. I don't believe this issue was discovered or discussed during Stage 2, and until the feature is officially part of the spec we should be open to discussing possible issues with the implementation, along with possible resolutions. If it is truly too late to make a change in semantics, then I would propose adding a restriction that would result in a runtime error during ClassDefinitionEvaluation when a subclass attempts to shadow a field defined on a superclass with an accessor, as this will never work and will catch users by surprise. |
It's fairly easy to claim that this particular issue wasn't accepted even by the developers that TC39 mentions when they claim wide acceptance of this proposal, as back then Babel and TypeScript used [[Set]] semantics. |
@rbuckton I don't agree with the decision either, but if I recall correctly this is inaccurate:
(It's probably in the meeting notes from when this issue was still being debated by the committee, if anyone cares to dig it up.) To reiterate from @littledan's earlier comment in this thread:
|
@mbrowne's recollection is correct. This belief is incorrect. Set-vs-Define was discussed at length. |
One simple question: After the direction settled on Define, was sufficient time given to babel trials to ensure a reasonable level of acceptance from the public? |
I don't know what is considered "sufficient time", but:
|
I'm not talking about Set-vs-Define in general, I'm talking about the specific subclassing concern described in this issue: A field in |
To be clear, the goal of this issue was not to rehash Set vs Define, but to point out an inconsistency in user expectations vs the reality of the spec text. Perhaps it was a mistake to include a proposed solution in the original issue description above. I have a very strong concern that the specified semantics could be a source of confusion for users. As I said in #176 (comment), if the proposed solution (splitting the declaration into a Define and a Set) is not feasible, perhaps it would be better to do our best to error on this specific case so as to avoid a potential footgun. To clarify, given the following code:

```js
class Base {
  x = 1;
}
class Derived extends Base {
  get x() { … }
  set x(value) { ... }
}
```

A user might assume that the accessor declared in `Derived` overrides the field declared in `Base`, but at runtime the field Defined during `Base`'s construction shadows the accessor on the instance. Barring a solution that might make this work as expected, I would at least propose that we make this an error during ClassDeclarationEvaluation. If the user's intent was to shadow …
|
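For reference, a runnable version of the snippet above (same names; the accessor bodies are filled in here only to make the shadowing observable):

```js
class Base { x = 1; }
class Derived extends Base {
  get x() { return 2; }
  set x(value) { console.log("setter:", value); }
}

const d = new Derived();
console.log(d.x); // 1 — Base's field was Defined as an own data property,
                  //     shadowing the prototype accessor
d.x = 3;          // writes the own data property; the setter never runs
console.log(d.x); // 3
```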
@rbuckton In general, it's possible to extend any constructor, not just classes. And the class hierarchy may be dynamically manipulated at different levels. We've specifically tried, in the class fields proposal, to respect these manipulations. What do you imagine the semantics of this check would be? |
As the TC39 representative of 360, I have to make it clear that 360 has decided to block the class fields proposal from reaching stage 4, and this issue is one of the important concerns. |
Roughly this:
These last four conditions are indicative of custom inheritance solutions that are outside the scope of this restriction (i.e. they are still available as workarounds for specific situations). The goal is to provide an early warning to a developer that something is wrong. If the developer decides to use more advanced APIs to manipulate the prototype of the class, the prototype chain of the class constructor, or the instance returned by the constructor, they are still able to do so. |
My suggestion doesn't maintain the list of fields from the prototype chain indefinitely, only during the course of ClassDeclarationEvaluation. Once the class declaration has been evaluated, we throw this information away. The only thing that would be maintained is that each individual class would carry a list of its own fields (which it effectively must do anyway for field initialization to occur during construction). I'm suggesting we leverage this existing knowledge to throw an error to avoid a footgun. |
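A userland approximation of that idea (hypothetical helper names, `defineCheckedClass` and `declaredFields` are not part of any proposal), just to make the shape of the check concrete:

```js
const declaredFields = new WeakMap(); // class -> names of its own fields

function defineCheckedClass(cls, ownFieldNames) {
  // Gather field names declared by ancestor classes (registered the same way).
  const inherited = [];
  for (let p = Object.getPrototypeOf(cls); typeof p === "function"; p = Object.getPrototypeOf(p)) {
    inherited.push(...(declaredFields.get(p) ?? []));
  }
  // A prototype member that collides with an ancestor field would be
  // silently shadowed at construction time, so fail fast instead.
  for (const name of inherited) {
    if (Object.getOwnPropertyDescriptor(cls.prototype, name)) {
      throw new TypeError(`'${String(name)}' would be shadowed by a superclass field`);
    }
  }
  declaredFields.set(cls, ownFieldNames);
  return cls;
}

class Base { x = 1; }
defineCheckedClass(Base, ["x"]);
defineCheckedClass(class extends Base { get x() { return 2; } }, []); // throws TypeError
```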
Using …

A bit off-topic, but it would be great if an error could be thrown for this case as well:

```js
class Parent {
  x = 1
}
class Sub extends Parent {
  x
}
```

Yes, it could be a linting rule, but why should the language allow usage in the first place that never serves a useful purpose and will only cause confusion? To deliberately override the default value and set it to `undefined`, an explicit `x = undefined` would be clearer anyway. No need to reply to this part of my comment—I don't want to sidetrack this thread—just wanted to mention it as something to think about and possibly discuss in the future. And also to express my agreement with the idea of an error message to help prevent foot-guns (the topic of this thread being a much more important foot-gun to address). |
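Concretely, the behavior under the current proposal:

```js
class Parent { x = 1 }
class Sub extends Parent { x } // no initializer — Defines `x` as undefined

// Sub's fields are defined after Parent's, so the inherited value is wiped:
console.log(new Sub().x); // undefined, not 1
```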
@mbrowne Neither your issue nor the specific issue @rbuckton raised in this thread can be solved by a linter, because the base class could be in a separate package and out of your control. Linters normally work per-module, and it's impractical to go through deep dependencies and scan all source code for an edge case like this. Even TypeScript cannot help you, because it relies on *.d.ts files to provide type info, and currently there is no way to differentiate accessors from data properties in a TS interface. And I feel that adding such subtle info to the TS type system and emitting it to *.d.ts is meaningless for most usage and only increases the burden on all maintainers and users. |
@hax Yes, linters have limitations. I think that linting just one's own codebase for cases like these could still be quite valuable, but now that you mention it, it could be problematic for mono-repos and not just third-party packages. |
These checks seem pretty fragile and not the kind of thing I would expect to see added to the main ES language. Three issues that come to mind:
I can think of various ad-hoc ways through all of these, but it just feels like digging ourselves deeper into a hole of more complexity. Even if single-file linters may have trouble with this enforcement, type systems should be able to do a good job. |
@littledan There's a way to do this without all the complexity. Following your bullet points:
I'm willing to bet that almost (if not all) of the ideas you can think of involve some form of post-definition processing. If you drop all ideas that come in after-the-fact, the complexity of this idea falls off sharply. |
@rdking To be concrete, could you describe the algorithm you have in mind? Then, I can clarify with code samples to show the issues I am concerned about. |
It's not very different from what @rbuckton originally stated. Let me describe it this way. Given 2 classes such that
Since all of this only occurs during the evaluation of a `class` declaration … |
@rdking @rbuckton Thanks for making the algorithm clear. I'm still not quite convinced that it's likely that people will construct subclasses where they aren't thinking about these sorts of name overlaps, but I do think that this kind of check would be impractical to make work consistently. Here are some cases where it won't quite kick in.

**Return override**

For example, you could have a parent class which is a factory, returning an instance of another class, and then you could subclass that factory. The check would not be able to understand the overlap for the object returned from the parent constructor:

```js
class B { x; }
class A { constructor() { return new B() } }
class C extends A { x() { } } // fails to throw, but the field shadows the method
```

We specifically decided to support return override in conjunction with class fields, so that these fields could be used flexibly, for example, with HTML custom elements.

**Intervening ES5-style subclass**

If the check is based on lists that are linked together based on ES6 classes, then an ES5-style class in the hierarchy will break the checks, since the function will not have the internal slot linking it to the parent class. How could it? There was no `class` syntax involved:

```js
class A { x; }
function B(...args) { return Reflect.construct(A, args, new.target); }
B.__proto__ = A;
B.prototype.__proto__ = A.prototype;
class C extends B { x() { } } // fails to throw, but the field shadows the method
```

ES6 classes were always designed to interact transparently with ES5-style classes (even if it wasn't possible to use …).

**Mutating parent class**

If the check is based on the parent class when it was declared, as in the `extends` clause, then the check will not reflect any further mutations:

```js
class A { }
class B extends A { }
class C { x; }
B.__proto__ = C;
B.prototype.__proto__ = C.prototype;
class D extends B { x() { } } // fails to throw, but the field shadows the method
```

Note that both public and private fields are created on the instance based on the prototype chain at the time the class is instantiated, not the extends clause when it was defined. This decision was made in order to maintain a correspondence with internal slots, where a subclass may be dynamically reparented to a built-in class with internal slots, and these slots will be created on instantiation.

**Pulling it together**

There are other variants on the above checks that could get around the issues noted above, but instead cause new issues. This includes doing a prototype chain walk on subclassing (observable on Proxies, fragile with ES5-style subclassing, and still brittle with after-the-fact prototype chain manipulation) or Has checks during instantiation rather than subclassing (which would significantly complicate the operations observed by Proxies, and add a large and rather incongruous mental space in the object model). I'm happy to discuss these further if you want, but this post is getting pretty long already.

Overall, do these edge cases matter much? I'm not sure if they matter much from the perspective of ordinary programming. But when designing the semantics of JavaScript in TC39, we tend to emphasize the value of sound, consistent semantics: we aim for a language with a comprehensible mental model, down to the details. Checks in the language tend to be simple, consistent and strong. This is a weak check--it doesn't really prove anything in particular. I'd prefer to keep such checks in tools, which can evolve practical, unsound strategies more flexibly than we can in the language itself. |
Fair point, but what is the actual downside of a check that doesn't cover 100% of edge cases, as long as it works for standard usage and best practices (or at least what's the best practice 99% of the time) of class-based programming? If any programmer is using the absence of errors from the JS engine alone as proof that their code is production-ready, then there's a much bigger problem... The counterpoint to your argument is that preventing a footgun 95% of the time is better than not preventing it at all. Another factor to consider is the complexity of implementing this check in linting tools / type systems, given that many NPM packages have already been built using Set semantics. It looks like TypeScript is now planning to address this, but that only helps with packages that will be rebuilt with the latest version of TS, and also doesn't address non-TS packages built with older versions of Babel (or the current Babel version with loose mode enabled). However, regardless of what the language itself does, the fact remains that some sort of linting solution, or more likely a type system solution is still needed, because most people are still using transpilers for class fields, and obviously a validation check in the language itself won't help unless using native fields. |
@littledan all of the issues you mentioned I specifically called out as advanced inheritance features that this recommendation explicitly isn't intended to cover. This is purely about the simplest case of avoiding a foot gun in the declaration. I don't want to prevent advanced cases such as return overrides, etc. My suggestion is intended to find a solution to a flaw in the design of class fields without throwing out the whole feature, as some seem interested in doing. I equate this almost to the requirement for parens to disambiguate There are a number of ways class fields could have been designed to avoid this subclassing flaw. Since the champions are opposed to those changes, this is the only way I can see to help users avoid this flaw in the design. |
I don't want to throw out the whole feature either. |
I don't think this is nearly as big of a footgun as that was, and the price for this proposed fix is much higher: it significantly complicates the semantics, prevents writing otherwise reasonable code, and requires a runtime cost paid by every user of every website using class fields. I personally don't think this is a good idea. I don't regard this behavior as a flaw - there's not really any obviously correct semantics for mixing fields and methods of the same name across the inheritance hierarchy, and the behavior this proposal chooses is not particularly unreasonable. I also don't think there's much appetite in the committee to change the behavior of class fields here, especially with this sort of ad-hoc patch. We discussed cases like those in this thread during the [[Define]] vs [[Set]] debate and still decided to go with [[Define]]. |
I assumed that the runtime cost would be negligible, because the check would be happening at a time in the evaluation process when all the fields have to be evaluated anyway, and there would be no need to run it again after the class was created. Are you saying that it would be enough of a performance cost that it would be noticeable in realistic benchmarks? |
I find that this inconsistency in inheritance semantics fails this goal. The semantics of class fields compared to the existing inheritance model for methods/accessors are inconsistent and don't align with the mental model of either OO developers coming from other languages, nor existing JavaScript developers who are already familiar with class inheritance semantics in JavaScript. In most other OO languages, fields and other members cannot have colliding names. While I understand that JavaScript is not one of those languages, and doesn't need to be, that doesn't mean that we shouldn't consider finding a solution.

My suggestion to restrict member names during class declaration evaluation is intentionally weak, so as to allow developers to be able to "work around" this behavior in advanced scenarios. A stronger approach would be to have field initialization throw if an existing member is present during instance creation. That would result in far more reliable and consistent semantics, but would be significantly more restrictive than the "weak" check: subclassing and "overriding" a field with another field would become an error, as would field declarations like …

There may be other solutions we haven't considered yet, and I'm open to discussing them, but I still feel that this is a significant enough footgun for new users that we should investigate some mechanism of informing the user they're doing something wrong. If we are unable to fix this, we in effect push the responsibility for this issue to the educators, sites like MDN or StackOverflow, linters that may not have access to the source code of your superclass, or type systems like TypeScript that can still fail to report an error if the type system isn't provided enough context. The problem with passing on the responsibility is that "you don't know what you don't know": hobbyist and experienced developers alike could run afoul of this inconsistency in unexpected ways if they haven't extensively researched the obscure corner-cases of this feature. The most reliable place to surface this is in the language itself. |
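A sketch of that stronger check (hypothetical semantics, shown as a userland helper standing in for the field-definition step during construction):

```js
// Hypothetical: what field definition would do under the "stronger" check —
// throw if the name is already observable anywhere on the instance's chain.
function defineFieldStrict(instance, name, value) {
  if (name in instance) {
    throw new TypeError(`field '${String(name)}' would shadow an existing member`);
  }
  Object.defineProperty(instance, name, {
    enumerable: true, configurable: true, writable: true, value,
  });
}
```

Under these semantics, even `class Sub extends Base { x = 2 }` with `class Base { x = 1 }` would throw, since `Base`'s field is already present on the instance by the time `Sub`'s fields are defined, which is exactly the extra restrictiveness described above.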
If we went with the stronger check, then those would have to be written as |
Yet your discouragement aside, it’s already an established best practice in the react community, as is “nothing should go in the constructor that can go in a field instead”. |
Using |
@rbuckton That's one of the reasons why I proposed the idea of having the |
@ljharb That's great for the React community, given their shallow inheritance from a single, fixed source. Can you honestly say that this "established best practice" is also equally good for other projects with differing constraints? |
That’s not what I’m saying; I’m saying that because it’s a best practice in common use, other projects with differing constraints still have to operate in a world where that practice exists and has first-class language support. |
@ljharb You didn't answer the question. Nice dodge though. However, in a world with multiple conflicting sets of best practice in varying degrees of "common use", is it better to side with just one? Or is it better to remain neutral, and allow the developer to adopt the best practice that suits their needs? |
I did answer it: of course i can’t say that. I’m saying that the answer is irrelevant. It’s impossible for language design to be neutral - choices always have to be made, and “do nothing” is its own choice with its own tradeoffs. |
No, you implied it. I prefer not to need to infer the answer when I ask a question. Thank you for stating it clearly this time.
Yet another misconception. Being neutral doesn't mean doing nothing. It means supporting all paths that can reasonably be supported. It is possible (in many different ways) to support both [[Set]] and [[Define]]. I'm 100% certain of what I've just said, as I've written several modules experimenting with such behavior and managed to get it to work for all cases except one, where a Proxy invariant kept me from displacing the effect of …
|
Just FYI, TypeScript nowadays throws errors for the cases @rbuckton mentioned, but only when `useDefineForClassFields` is enabled. @rbuckton and @rdking have a higher number of votes on their replies compared to replies in opposition (which mostly have no votes). I wish those votes counted.
What about just keeping the web easy-to-use and not requiring any build tools? Let's not forget about people who want to simply edit an HTML file and get to making something awesome. I would vote to revoke the existing class-fields behavior and switch to [[Set]], even if that breaks my applications (and those of others). I believe the long-term benefit would far outweigh the temporary breakage. Browsers can log prominent yellow or red messages to the console for a number of months, then finally make the flip. @ljharb Speaking of what people do in practice (like you mentioned with React), what about Angular users? Have you forgotten about them? They've been using decorators, and now those don't work with class-field [[Define]] semantics. That seems to go against your philosophy of siding with existing real-world "best practices". (Unless you particularly have unfair bias towards React winning and therefore purposefully don't tell the other side of the story? I'd like to believe that isn't the case.) |
@trusktr i'm not familiar enough with angular to answer; but i'm very skeptical that Set vs Define actually matters to the vast majority of use cases. If you have evidence for that (not just, "one library breaks", but "lots of code patterns that work with Set don't work with Define"), then i'd file a new issue providing that information, as I'm sure champions and delegates would be very interested to consider it. |
I believe we may need to revisit the Set vs. Define debate of public fields in light of this scenario:

Without an in-depth understanding of the spec, a naïve developer might assume that (a) would print `1` and (b) would print `2`, however this is not the case. According to the proposal, `x` will be Defined on the instance (`obj`) during construction. This means that the field `x` on `Base` will override the accessor `x` on `Sub`, which seems to violate developer intuition that members on `Sub` should override members on `Base`. In fact, the only way for `Sub` to override `x` with an accessor would be to use `Object.defineProperty` in the constructor of `Sub` to replace the field with an accessor on the instance.

I only see two solutions to solve this issue:

1. Split the declaration into a Define and a Set: use Define on the prototype of `Base` to add a property with the descriptor `{ enumerable: true, configurable: true, writable: true, value: undefined }`, and use Set on the instance during construction of `Base` to evaluate the initializer.
2. Use Set on the instance during construction of `Base` to evaluate the initializer.

Option 1 is the most consistent with the current semantics that use Define (a field on a subclass overrides a member on the superclass), but solves the above issue.
Option 2 is consistent with the state of existing compile-to-JavaScript languages over the last 6 years.
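The code for the scenario is not reproduced above; purely as an illustration, here is a snippet consistent with the surrounding description, assuming (a) and (b) mark the accessor bodies in `Sub`:

```js
class Base {
  x = 1;
}
class Sub extends Base {
  get x() { console.log(1); }       // (a)
  set x(value) { console.log(2); }  // (b)
}

const obj = new Sub();
obj.x;     // naïve expectation: the getter (a) runs and prints 1
obj.x = 9; // naïve expectation: the setter (b) runs and prints 2
// Actual behavior under the proposal: Base's field is Defined as an own
// data property of `obj`, shadowing the accessors, so neither ever runs.
```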