
Public fields "Define" semantics and subclasses #176

Closed
rbuckton opened this issue Nov 29, 2018 · 58 comments

@rbuckton

I believe we may need to revisit the Set vs. Define debate for public fields in light of this scenario:

class Base {
  x = 1;
}
class Sub extends Base {
  set x(value) { console.log(value); }
}

// (a)
const obj = new Sub(); 

// (b)
obj.x = 2;

Without an in-depth understanding of the spec, a naïve developer might assume that (a) would print 1 and (b) would print 2; however, this is not the case. According to the proposal, x will be Defined on the obj instance during construction. This means that the field x on Base will override the accessor x on Sub, which seems to violate developer intuition that members on Sub should override members on Base. In fact, the only way for Sub to override x with an accessor would be to use Object.defineProperty in the constructor of Sub to replace the field with an accessor on the instance.
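
To make the mechanics concrete, here is a rough sketch of the difference (a hypothetical desugaring for illustration, not the actual spec text):

// [[Define]] (current proposal): the initializer behaves roughly like
// Object.defineProperty on the instance, so Sub's setter is never consulted.
class DefineBase {
  constructor() {
    Object.defineProperty(this, "x", {
      value: 1, writable: true, enumerable: true, configurable: true
    });
  }
}

// [[Set]]: the initializer would behave like an ordinary assignment,
// so constructing Sub would invoke its setter and log 1.
class SetBase {
  constructor() {
    this.x = 1;
  }
}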

I see only two ways to solve this issue:

  1. Split a field into two operations (see the sketch after this list):
    1. Use Define on the prototype of Base to add a property with the descriptor: { enumerable: true, configurable: true, writable: true, value: undefined }
    2. Use Set in the constructor of Base to evaluate the initializer.
  2. Use Set semantics for fields.
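
A rough sketch of what option 1 would mean for the original example (a hypothetical desugaring, assuming the split described above):

// hypothetical desugaring of `class Base { x = 1; }` under option 1:
class Base {
  constructor() {
    this.x = 1;  // Set: evaluating the initializer now triggers Sub's setter
  }
}
Object.defineProperty(Base.prototype, "x", {
  enumerable: true, configurable: true, writable: true, value: undefined
});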

Option 1 is the most consistent with the current semantics that use Define (a field on a subclass overrides a member on the superclass), but solves the above issue.

Option 2 is consistent with the behavior of existing compile-to-JavaScript languages over the last 6 years.

@nicolo-ribaudo
Member

Using define+set, how would it be possible to create a decorator to make a field non-writable?

This would throw, or define a writable property after tc39/ecma262#1320:

class Foo {
  @readonly x = 2;
}

function readonly(desc) {
  desc.descriptor.writable = false;
}

@littledan
Member

I don't see what's new about this scenario. I agree that it's possible to write a code snippet which will be locally surprising, but that's true about all semantic choices. I think TC39 was thinking about this sort of case when it made the Define call.

@mbrowne

mbrowne commented Dec 8, 2018

Does anyone have an equally compelling code example where [[Set]] behavior is "locally surprising"? (I fully realize that "equally compelling" is somewhat subjective, so interpret that however you like.)

@mbrowne

mbrowne commented Dec 10, 2018

If we must live with [[Define]] as the default (however good or bad that turns out to be), then I think a decorator would be the best workaround for this case. Example:

class Sub extends Base {
  @instance
  set x(value) { console.log(value); }
}
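
A minimal sketch of such an @instance decorator against the decorator descriptors being discussed at the time (hypothetical; the decorators API has since changed):

// move the accessor from the prototype to the instance, so it is defined
// during Sub's own field-initialization phase and shadows Base's data property
function instance(desc) {
  desc.placement = "own";
  return desc;
}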

@rbuckton
Author

@nicolo-ribaudo possibly with an "initializer" (or whatever we call it)?

function readonly(desc) {
  desc.extras = [{ kind: "initializer", placement: desc.placement, initializer: function() {
    // set writability _after_ the field is set, but only if it is an "own" property.
    const prop = Object.getOwnPropertyDescriptor(this, desc.key);
    if (!prop || prop.get || prop.set) throw new Error(`'${desc.key}' is not a field`);
    prop.writable = false;
    Object.defineProperty(this, desc.key, prop);
  } }];
  return desc;
}

@nicolo-ribaudo
Member

nicolo-ribaudo commented Dec 10, 2018

Does anyone have an equally compelling code example where [[Set]] behavior is "locally surprising"? (I fully realize that "equally compelling" is somewhat subjective, so interpret that however you like.)

This comment convinced me that [[Set]] can't be worse than [[Define]]. I tried really hard, but I couldn't find an example as realistic as the arguments against [[Define]].
The only example I found is this:

// Library code
class MyComponent extends React.Component {
  state = { secret: "secret" };
}

// Attacker code
React.Component.prototype.__proto__ = new Proxy(React.Component.prototype.__proto__, new Proxy({}, {
  get(target, name) {
    return function (...args) {
      console.log(name, args);
      return Reflect[name](...args);
    };
  }
}));

// User code
<MyComponent />

// with [[Set]] it logs "set", MyComponent{}, "state", { secret: "secret" }

which can have potential security implications, but

  1. It only works if the attacker has access to the class or to a superclass.
  2. If you want your data to be hidden, you can use private fields. Note that with [[Set]], not leaking the class to an outside scope isn't enough.
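
For example (a sketch), a private field is installed directly on the instance without walking the prototype chain, so the proxy above observes nothing:

class MyComponent extends React.Component {
  #state = { secret: "secret" };  // no [[Set]], no proxy trap, not reflectable
}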

@panlina

panlina commented Dec 24, 2018

I just took a day yesterday to look at this proposal. I have to say my vote goes to [[Set]]. This example is a good illustration of the problem with [[Define]].

@hax
Member

hax commented Dec 24, 2018

@rbuckton

I think your proposal could solve half of the problem, but it still leaves the other half:

// base v1.0
class Base {
  constructor() {
    this.x = 1
  }
}
class Sub extends Base {
  x = 100; 
}
// base v1.1 refactor to getter/setter
class Base {
  get x() {...}
  set x(v) {...}
}

The essential issue is that the x = 100 syntax just implies [[Set]]. So we would also need to change the syntax (for example: <keyword> x = 100) to complement your proposal.
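
A runnable sketch of the hazard (the _x backing property is mine, for illustration only): under [[Define]], upgrading Base to v1.1 silently disables the new setter:

// base v1.1
class Base {
  get x() { return this._x; }
  set x(v) { this._x = v; console.log("validating", v); }
}
class Sub extends Base {
  x = 100;  // [[Define]]: creates an own data property; Base's setter never runs
}
new Sub().x;  // 100 -- nothing is logged, the v1.1 validation is silently skipped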

@littledan
Member

Chrome is shipping public fields with Define semantics in its beta branch. As I have explained in other threads, we don't have ecosystem consensus towards one or the other of these alternatives, and TC39 has considered this question at length and settled strongly on the Define side. I'm not sure what benefit we will get by continuing to discuss the issue.

@hax
Member

hax commented Dec 27, 2018

@littledan Chrome shipped it because it is already Stage 3. The problem is how it could advance to Stage 3 while there are still several serious issues like this one. It is said that TC39 has consensus on all issues, but obviously some members never agreed, or may change their minds later. So how do we deal with such a situation?

I'm not sure what benefit we will get by continuing to discuss the issue.

Yes. Without fixing process first, we are all wasting our time.

@littledan
Member

I think the way to address these situations in the future is to discuss the issues at Stage 2 and come to a conclusion before reaching Stage 3. See the explainer for a summary of the Set vs Define debate.

@mbrowne

mbrowne commented Jan 4, 2019

@littledan Thanks for adding the summary to the explainer/readme. For stage 2 proposals in general, where is the possible schedule for advancement to stage 3 announced?

BTW, I doubt that most developers (including myself until recently) realize that stage 3 basically means all syntactic and semantic details are final and not open to reconsideration unless some very major and unforeseen problem crops up. Even with this description of what "Acceptance signifies" from https://tc39.github.io/process-document/:

The solution is complete and no further work is possible without implementation experience, significant usage and external feedback.

...it still sounds like, if there were a lot of negative community feedback on a particular issue, the design decision could be reconsidered even if someone had pointed out the issue already and the committee was aware of it at the time they moved the proposal to stage 3 (especially if the volume of feedback and the negative impact were much greater than anticipated). But what I'm hearing from you and other committee members is that only new and unforeseen issues would have any sway. I actually don't think that there will be some huge negative reaction on this particular issue (for one thing, it's an edge case) and maybe it will even be more positive than negative; I'm just describing a hypothetical situation here to understand the process.

@mbrowne

mbrowne commented Jan 4, 2019

P.S. At the very least, I think the process document should be updated to indicate that stage 3 means shipping in browsers and/or node.js (at least 2 implementations), not behind an experimental flag.

@littledan
Member

@mbrowne This is a really good point. We've discussed many process clarifications like that, but were unable to get consensus on them, unfortunately. We're working on improving our supplementary documentation on how the stage process works in practice, but this documentation is still under internal review and not released yet. I'll be happy to get your feedback on the document when it's released.

@littledan
Member

Closing as per #176 (comment).

@rbuckton
Author

rbuckton commented Oct 2, 2019

I think the way to address these situations in the future is to discuss the issues at Stage 2 and come to a conclusion before reaching Stage 3. See the explainer for a summary of the Set vs Define debate.

I don't agree with the decision to close this issue. The point of Stage 3 is to implement, experiment, and determine if there are bugs or issues with the semantics of the proposal. I don't believe this issue was discovered or discussed during Stage 2, and until the feature is officially part of the spec we should be open to discussing possible issues with the implementation, along with possible resolutions.

If it is truly too late to make a change in semantics, then I would propose adding a restriction that would result in a runtime error during ClassDefinitionEvaluation when a subclass attempts to shadow a field defined on a superclass with an accessor, as this will never work and will catch users by surprise.

@rdking

rdking commented Oct 3, 2019

It's fairly easy to claim that this particular issue wasn't accepted even by the developers TC39 mentions when it claims wide acceptance of this proposal, since back then Babel and TypeScript used [[Set]] semantics. Even though that's obviously not the intention, it does feel a bit like "bait-and-switch".

@mbrowne

mbrowne commented Oct 3, 2019

@rbuckton I don't agree with the decision either, but if I recall correctly this is inaccurate:

I don't believe this issue was discovered or discussed during Stage 2

(It's probably in the meeting notes from when this issue was still being debated by the committee, if anyone cares to dig it up.)

To reiterate from @littledan's earlier comment in this thread:

I think TC39 was thinking about this sort of case when it made the Define call.

@syg

syg commented Oct 3, 2019

I don't believe this issue was discovered or discussed during Stage 2

@mbrowne's recollection is correct. This belief is incorrect. Set-vs-Define was discussed at length.

@rdking

rdking commented Oct 3, 2019

One simple question: after the direction settled on Define, was sufficient time given to Babel trials to ensure a reasonable level of acceptance from the public?

@nicolo-ribaudo
Member

nicolo-ribaudo commented Oct 3, 2019

I don't know what is considered "sufficient time", but:

  1. [[Define]] has been opt-in since 6.19.0 (2016-11-16)
  2. [[Define]] was enabled by default in 7.0.0-alpha.20 (2017-08-30)
  3. The first stable version with [[Define]] enabled by default is 7.0.0 (2018-08-28)

@rbuckton
Author

rbuckton commented Oct 3, 2019

I don't believe this issue was discovered or discussed during Stage 2

@mbrowne's recollection is correct. This belief is incorrect. Set-vs-Define was discussed at length.

I'm not talking about Set-vs-Define in general, I'm talking about the specific subclassing concern described in this issue: a field in Base can shadow an accessor in Derived, breaking user expectations of inheritance. I raised this issue when I discovered the inconsistency after fields had reached Stage 3; I do not recall this specific issue coming up during Stage 2.

@rbuckton
Author

rbuckton commented Oct 3, 2019

To be clear, the goal of this issue was not to rehash Set vs Define, but to point out an inconsistency in user expectations vs the reality of the spec text.

Perhaps it was a mistake to include a proposed solution in the original issue description above. I have a very strong concern that the specified semantics could be a source of confusion for users. As I said in #176 (comment), if the proposed solution (splitting the declaration into a Define and a Set) is not feasible, perhaps it would be better to do our best to error on this specific case so as to avoid a potential footgun.

To clarify, given the following code:

class Base {
  x = 1;
}
class Derived extends Base {
  get x() {  }
  set x(value) { ... }
}

A user might assume that the declaration in Derived would shadow the declaration in Base, which is the case for all other class elements. However, in the proposed spec text, the declaration of x in Base instead shadows the declaration in Derived. This behavior is likely not what the user intended, as it does not align with the behavior of any other class elements.

Barring a solution that might make this work as expected, I would at least propose that we make this an error during ClassDefinitionEvaluation. If the user's intent was to shadow x in Base with an accessor in Derived, they can still achieve that using Object.defineProperty in the constructor of Derived. If the user's intent was to add an unused accessor to Derived that will be shadowed by Base, they can still achieve that using Object.defineProperty against Derived.prototype. Either of these cases is unlikely, but there would at least be an escape hatch for introducing this behavior. However, avoiding the footgun for the case where the meaning is ambiguous would be preferred to doing nothing at all.
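
For reference, the first escape hatch might look something like this (a sketch):

class Derived extends Base {
  constructor() {
    super();  // Base's field initializer defines an own data property "x"
    Object.defineProperty(this, "x", {  // replace it with an accessor
      get() { /* ... */ },
      set(value) { /* ... */ },
      configurable: true
    });
  }
}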

@littledan
Member

@rbuckton In general, it's possible to extend any constructor, not just classes. And the class hierarchy may be dynamically manipulated at different levels. We've specifically tried, in the class fields proposal, to respect these manipulations. What do you imagine the semantics of this check would be?

@hax
Member

hax commented Oct 3, 2019

As the TC39 representative of 360, I have to make it clear that 360 has decided to block the class fields proposal from reaching stage 4, and this issue is one of the important concerns.

@rbuckton
Author

rbuckton commented Oct 3, 2019

Roughly this:

  • A class should track the names of its public instance fields in some fashion, such as in a private slot on the constructor.
  • If the class has an extends clause, concatenate the lists of field names of this class, of the class referenced by the extends clause, and of each prototype in its prototype chain.
    • If a value does not have a private slot with the list of fields, ignore it and continue with its prototype.
  • When defining prototype members that are not fields (i.e. methods/accessors), if the name of the member is present in this list throw an error.
    • Fields are excluded from this list as defining a field in a Derived class that shadows a field in a Base class with the same name will have the expected outcome, namely that the field from Derived will be the final value of that field on the instance.
  • No special affordance needs to be made if the base was not declared as a class (i.e. was declared as a function instead), nor any other prototype in its prototype chain that is not a class.
  • No special affordance needs to be made if the constructor of the base class contains defineProperty calls.
  • No special affordance needs to be made if the constructor of the derived class contains defineProperty calls.
  • No further checking is done after evaluation completes, i.e. a developer can choose to call setPrototypeOf on the class or any superclass and no further checking is performed.

These last four conditions are indicative of custom inheritance solutions that are outside the scope of this restriction (i.e. they are still available as workarounds for specific situations).

The goal is to provide an early warning to a developer that something is wrong. If the developer decides to use more advanced APIs to manipulate the prototype of the class, the prototype chain of the class constructor, or the instance returned by the constructor, they are still able to do so.
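
A very rough sketch of that check in JavaScript (hypothetical: fieldsOf stands in for reading the private slot, and the real check would be spec steps inside ClassDefinitionEvaluation):

function checkPrototypeMembers(ctor, memberNames) {
  const inheritedFields = new Set();
  for (let p = Object.getPrototypeOf(ctor); p !== null; p = Object.getPrototypeOf(p)) {
    const fields = fieldsOf(p);          // undefined when p was not declared as a class
    if (fields === undefined) continue;  // ignore it and continue with its prototype
    for (const name of fields) inheritedFields.add(name);
  }
  for (const name of memberNames) {      // methods/accessors only; fields are exempt
    if (inheritedFields.has(name)) {
      throw new TypeError(`'${name}' would be shadowed by a superclass field`);
    }
  }
}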

@rbuckton
Author

rbuckton commented Oct 4, 2019

My suggestion doesn't maintain the list of fields from the prototype chain indefinitely, only during the course of ClassDefinitionEvaluation. Once the class declaration has been evaluated, we throw this information away. The only thing that would be maintained is that each individual class would carry a list of its own fields (which it effectively must do anyway for field initialization to occur during construction). I'm suggesting we leverage this existing knowledge to throw an error to avoid a footgun.

@mbrowne

mbrowne commented Oct 4, 2019

Using method = this.method.bind(this) as a field declaration might not be as good a practice as doing it in the constructor, but it's a lot better than using an arrow function (see https://github.com/mbrowne/bound-decorator/blob/master/MOTIVATION.md), and it should definitely be supported. But I think @rbuckton's suggestion of throwing an error for this case is a great idea, and it seems like it should be possible to work around this issue.

A bit off-topic, but it would be great if an error could be thrown for this case as well:

class Parent {
    x = 1
}
class Sub extends Parent {
    x
}

Yes, it could be a linting rule, but why should the language allow a usage in the first place that never serves a useful purpose and will only cause confusion? To deliberately override the default value and set it to undefined, it would always be better to be explicit and write x = undefined.

No need to reply to this part of my comment—I don't want to sidetrack this thread—just wanted to mention it as something to think about and possibly discuss in the future. And also to express my agreement with the idea of an error message to help prevent foot-guns (the topic of this thread being a much more important foot-gun to address).

@hax
Member

hax commented Oct 4, 2019

Yes it could be a linting rule

@mbrowne Neither your issue nor the specific issue @rbuckton raised in this thread can be solved by a linter, because the base class could be in a separate package and out of your control. Linters normally work per-module, and it's impractical to go through deep dependencies and scan all source code for an edge case like this. Even TypeScript can't help you, because it relies on *.d.ts files to provide type info, and currently there is no way to differentiate accessors from data properties in a TS interface. And I feel that adding such subtle info to the TS type system and emitting it to *.d.ts is meaningless for most usage and would only increase the burden on all maintainers and users.

@mbrowne

mbrowne commented Oct 4, 2019

@hax Yes, linters have limitations. I think that linting just one's own codebase for cases like these could still be quite valuable, but now that you mention it, it could be problematic for mono-repos and not just third-party packages.

@littledan
Member

littledan commented Oct 4, 2019

These checks seem pretty fragile and not the kind of thing I would expect to see added to the main ES language. Three issues that come to mind:

  • If you don't want to maintain the original slot list, then you have to do an observable (through Proxy) walk of the prototype chain during class definition.
  • Fields added through return override will still face the hazard.
  • If a class is wrapped by a function (e.g., for the call constructor decorator pattern), these protections might fall through if the wrapper is not implemented just the right way.

I can think of various ad-hoc ways through all of these, but it just feels like digging ourselves deeper into a hole of more complexity.

Even if single-file linters may have trouble with this enforcement, type systems should be able to do a good job.

@rdking

rdking commented Oct 4, 2019

@littledan There's a way to do this without all the complexity. Following your bullet points:

  • There's no need for the temporary slot list in the first place. Since each class must always be aware of its own lexically declared fields, and must likewise be aware of its base class if any, we can just make a linked list of the fields. In this way, reading the list for some class C would effectively give you "own fields" for C and "in fields" for C's ancestors.

  • The return override trick cannot add fields. It can add instance properties, but that doesn't matter since this is post-definition processing. If I understood @rbuckton's idea, he's only considering doing this kind of screening while processing the class definition, not at any later point like creating an instance of the class.

  • Wrapping a class with a function is again post-definition processing. This is beyond the scope that he's suggesting we check.

I'm willing to bet that almost all (if not all) of the ideas you can think of involve some form of post-definition processing. If you drop all ideas that come in after the fact, the complexity of this idea falls off sharply.

@littledan
Member

@rdking To be concrete, could you describe the algorithm you have in mind? Then, I can clarify with code samples to show the issues I am concerned about.

@rdking

rdking commented Oct 4, 2019

It's not very different from what @rbuckton originally stated. Let me describe it this way. Given 2 classes such that class B extends A {...}, then:

  • During the parsing of any class, a list of records about declared fields is created (presumably, this is already happening). Each record should contain:
    • field name
    • field type (private or not)
  • The list should also include a reference to the corresponding list for the class' ancestor
  • During the parsing of any class, should an uninitialized non-private field, or any prototype property try to bind a name that exists as a non-private field in an ancestor's field records, throw.

Since all of this only occurs during the evaluation of a class statement, does not require the creation of extra lists that shouldn't already exist in this proposal, and does not have any effect whatsoever after completion of the evaluation, there's not much in the way of complexity. The most complex part is walking the ancestral field linked list to verify that a given field or property name doesn't conflict.
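
In data terms, the linked field-record list and the conflict check might look something like this (a sketch of the idea, not spec text):

// hypothetical record shape: { fields: [{ name, isPrivate }], parent: <ancestor's list or null> }
function bindingConflicts(list, name) {
  for (let l = list.parent; l !== null; l = l.parent) {  // ancestors only
    if (l.fields.some(f => !f.isPrivate && f.name === name)) return true;
  }
  return false;
}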

@littledan
Member

@rdking @rbuckton Thanks for making the algorithm clear. I'm still not quite convinced that it's likely that people will construct subclasses where they aren't thinking about these sorts of name overlaps, but I do think that this kind of check would be impractical to make work consistently. Here are some cases where it won't quite kick in.

Return override

For example, you could have a parent class which is a factory, returning an instance of another class, and then you could subclass that factory. The check would not be able to understand the overlap for the object returned from super().

class B { x; }
class A { constructor() { return new B() } }
class C extends A { x() { } }  // fails to throw, but the field shadows the method

We specifically decided to support return override in conjunction with class fields, so that these fields could be used flexibly, for example, with HTML custom elements.

Intervening ES5-style subclass

If the check is based on lists that are linked together based on ES6 classes, then an ES5-style class in the hierarchy will break the checks, since the function will not have the internal slot linking it to the parent class. How could it? There was no extends clause.

class A { x; }
function B(...args) { return Reflect.construct(A, args, new.target); }
B.__proto__ = A;
B.prototype.__proto__ = A.prototype;
class C extends B { x() { } }  // fails to throw, but the field shadows the method

ES6 classes were always designed to interact transparently with ES5-style classes (even if it wasn't possible to use new.target properly before). It would be unfortunate to start cutting holes in that transparency.

Mutating parent class

If the check is based on the parent class as it was when the subclass was declared, then the check will not reflect any further mutations.

class A { }
class B extends A { }
class C { x; }
B.__proto__ = C;
B.prototype.__proto__ = C.prototype;
class D extends B { x() { } }  // fails to throw, but the field shadows the method

Note that both public and private fields are created on the instance based on the prototype chain at the time the class is instantiated, not the extends clause when it was defined. This decision was made in order to maintain a correspondence with internal slots, where a subclass may be dynamically reparented to a built-in class with internal slots, and these slots will be created on instantiation.

Pulling it together

There are other variants on the above checks that could get around the issues noted above, but instead cause new issues. This includes doing a prototype chain walk on subclassing (observable on Proxies, fragile with ES5-style subclassing, and still brittle with after-the-fact prototype chain manipulation) or Has checks during instantiation rather than subclassing (which would significantly complicate the operations observed by Proxies, and add a large and rather incongruous mental space in the object model). I'm happy to discuss these further if you want, but this post is getting pretty long already.

Overall, do these edge cases matter much? I'm not sure if they matter much from the perspective of ordinary programming. But when designing the semantics of JavaScript in TC39, we tend to emphasize the value of sound, consistent semantics: we aim for a language with a comprehensible mental model, down to the details. Checks in the language tend to be simple, consistent and strong. This is a weak check--it doesn't really prove anything in particular. I'd prefer to keep such checks in tools, which can evolve practical, unsound strategies more flexibly than we can in the language itself.

@mbrowne

mbrowne commented Oct 7, 2019

@littledan

This is a weak check--it doesn't really prove anything in particular. I'd prefer to keep such checks in tools, which can evolve practical, unsound strategies more flexibly than we can in the language itself.

Fair point, but what is the actual downside of a check that doesn't cover 100% of edge cases, as long as it works for standard usage and best practices (or at least what's the best practice 99% of the time) of class-based programming? If any programmer is using the absence of errors from the JS engine alone as proof that their code is production-ready, then there's a much bigger problem... The counterpoint to your argument is that preventing a footgun 95% of the time is better than not preventing it at all.

Another factor to consider is the complexity of implementing this check in linting tools / type systems, given that many NPM packages have already been built using Set semantics. It looks like TypeScript is now planning to address this, but that only helps with packages that will be rebuilt with the latest version of TS, and also doesn't address non-TS packages built with older versions of Babel (or the current Babel version with loose mode enabled).

However, regardless of what the language itself does, the fact remains that some sort of linting solution, or more likely a type system solution is still needed, because most people are still using transpilers for class fields, and obviously a validation check in the language itself won't help unless using native fields.

@rbuckton
Author

rbuckton commented Oct 7, 2019

@littledan all of the issues you mentioned are ones I specifically called out as advanced inheritance features that this recommendation explicitly isn't intended to cover. This is purely about the simplest case of avoiding a foot gun in the declaration. I don't want to prevent advanced cases such as return overrides, etc.

My suggestion is intended to find a solution to a flaw in the design of class fields without throwing out the whole feature, as some seem interested in doing. I equate this almost to the requirement for parens to disambiguate ?? and || in the nullish coalescing spec. It is generally unnecessary, but exists solely to handle a possible footgun. I'd rather not need this simple check for class fields, but I strongly feel it is necessary due to the design of the feature.

There are a number of ways class fields could have been designed to avoid this subclassing flaw. Since the champions are opposed to those changes, this is the only way I can see to help users avoid this flaw in the design.

@rdking

rdking commented Oct 7, 2019

I don't want to throw out the whole feature either. class needs support for data declarations. That's a must. What I would prefer is for this proposal to experience a redesign that doesn't incur so many "trade-offs" that break the fundamentals of what a class should do or can be. If this proposal experienced an overhaul as is happening for decorators, then checks like this one wouldn't be necessary.

@bakkot
Contributor

bakkot commented Oct 7, 2019

I equate this almost to the requirement for parens to disambiguate ?? and || in the nullish coalescing spec.

I don't think this is nearly as big of a footgun as that was, and the price for this proposed fix is much higher: it significantly complicates the semantics, prevents writing otherwise reasonable code, and requires a runtime cost paid by every user of every website using class fields.

I personally don't think this is a good idea. I don't regard this behavior as a flaw - there's not really any obviously correct semantics for mixing fields and methods of the same name across the inheritance hierarchy, and the behavior this proposal chooses is not particularly unreasonable. I also don't think there's much appetite in the committee to change the behavior of class fields here, especially with this sort of ad-hoc patch. We discussed cases like those in this thread during the [[Define]] vs [[Set]] debate and still decided to go with [[Define]].

@mbrowne

mbrowne commented Oct 7, 2019

@bakkot

requires a runtime cost paid by every user of every website using class fields

I assumed that the runtime cost would be negligible, because the check would be happening at a time in the evaluation process when all the fields have to be evaluated anyway, and there would be no need to run it again after the class was created. Are you saying that it would be enough of a performance cost that it would be noticeable in realistic benchmarks?

@rbuckton
Author

rbuckton commented Oct 7, 2019

But when designing the semantics of JavaScript in TC39, we tend to emphasize the value of sound, consistent semantics: we aim for a language with a comprehensible mental model, down to the details.

I find that this inconsistency in inheritance semantics fails this goal. The semantics of class fields compared to the existing inheritance model for methods/accessors are inconsistent and don't align with the mental model of either OO developers coming from other languages or existing JavaScript developers who are already familiar with class inheritance semantics in JavaScript. In most other OO languages, fields and other members cannot have colliding names. While I understand that JavaScript is not one of those languages, and doesn't need to be, that doesn't mean we shouldn't consider finding a solution.

My suggestion to restrict member names during class declaration evaluation is intentionally weak, so as to allow developers to be able to "work around" this behavior in advanced scenarios. A stronger approach would be to have field initialization throw if an existing member is present during instance creation. That would result in far more reliable and consistent semantics, but would be significantly more restrictive than the "weak" check: Subclassing and "overriding" a field with another field would become an error, field declarations like method = this.method.bind(this); would become runtime errors, etc.
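
A sketch of what that stronger check could look like, expressed as JavaScript (hypothetical defineFieldChecked helper standing in for the per-field spec steps):

function defineFieldChecked(instance, key, value) {
  if (key in instance) {  // an own or inherited member already answers to this name
    throw new TypeError(`field '${key}' would shadow an existing member`);
  }
  Object.defineProperty(instance, key, {
    value, writable: true, enumerable: true, configurable: true
  });
}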

There may be other solutions we haven't considered yet, and I'm open to discussing them, but I still feel that this is a significant enough footgun for new users that we should investigate some mechanism of informing the user they're doing something wrong. If we are unable to fix this, we in effect push the responsibility for this issue to the educators, sites like MDN or StackOverflow, linters that may not have access to the source code of your superclass, or type systems like TypeScript that can still fail to report an error if the type system isn't provided enough context. The problem with passing on the responsibility is that "you don't know what you don't know": hobbyist and experienced developers alike could run afoul of this inconsistency in unexpected ways if they haven't extensively researched the obscure corner-cases of this feature. The most reliable place to surface this is in the language itself.

@ljharb
Member

ljharb commented Oct 7, 2019

method = this.method.bind(this) is a common pattern, and a best practice in React class components, so we definitely wouldn't want to obstruct that.

@rbuckton
Author

rbuckton commented Oct 8, 2019

If we went with the stronger check, then those would have to be written as this.method = this.method.bind(this); in the constructor instead. The fact that method = this.method.bind(this); works even if you subclass and override method is exactly the kind of example that will eventually lead to the footgun I am concerned about, as it encourages users to define fields and members with the same name. I would strongly discourage using this pattern.

@ljharb
Member

ljharb commented Oct 8, 2019

Yet your discouragement aside, it’s already an established best practice in the React community, as is “nothing should go in the constructor that can go in a field instead”.

@mbrowne

mbrowne commented Oct 8, 2019

Using method = this.method.bind(this) in a class property declaration (or to make it more concise, using a decorator that does the same thing) works perfectly well for React components, but it's worth noting that React component classes very rarely extend from anything other than the base React.Component class, and the use of inheritance as a code reuse mechanism is very much discouraged in React. So yes, the ability to create bound methods in class field declarations needs to be supported, but a new rule that only affects overriding a field in a parent class with a setter wouldn't really affect React component authors/maintainers.

@rdking

rdking commented Oct 8, 2019

@rbuckton That's one of the reasons why I proposed the idea of having the class declaration behave during evaluation as though it's a single namespace. Duplicate names within a single class definition should be strongly discouraged, as they lead to foot-guns just as bad as the prototype/object foot-gun TC39 tried so hard to avoid with "fields".

@rdking

rdking commented Oct 8, 2019

@ljharb That's great for the React community, given their shallow inheritance from a single, fixed source. Can you honestly say that this "established best practice" is also equally good for other projects with differing constraints?

@ljharb
Member

ljharb commented Oct 8, 2019

That’s not what I’m saying; I’m saying that because it’s a best practice in common use, other projects with differing constraints still have to operate in a world where that practice exists and has first-class language support.

@rdking

rdking commented Oct 8, 2019

@ljharb You didn't answer the question. Nice dodge though. However, in a world with multiple conflicting sets of best practice in varying degrees of "common use", is it better to side with just one? Or is it better to remain neutral, and allow the developer to adopt the best practice that suits their needs?

@ljharb
Member

ljharb commented Oct 8, 2019

I did answer it: of course i can’t say that. I’m saying that the answer is irrelevant.

It’s impossible for language design to be neutral - choices always have to be made, and “do nothing” is its own choice with its own tradeoffs.

@rdking

rdking commented Oct 8, 2019

I did answer it:

No, you implied it. I prefer not to need to infer the answer when I ask a question. Thank you for stating it clearly this time.

It’s impossible for language design to be neutral - choices always have to be made, and “do nothing” is its own choice with its own tradeoffs.

Yet another misconception. Being neutral doesn't mean doing nothing. It means supporting all paths that can reasonably be supported. It is possible (in many different ways) to support both [[Set]] and [[Define]] semantics simultaneously. It is possible (in many different ways) to support both prototype- and instance-based default data declaratively. It is possible (in a few different ways) to support Proxy in conjunction with private data. The point of the matter is that none of these paths were taken, and instead we're looking at a set of "trade-offs" that actually serve to hinder (if not subvert) some of the core principles of a class.

I'm 100% certain of what I've just said, as I've written several modules experimenting with such behavior and managed to get it to work for all cases except one, where a Proxy invariant kept me from displacing the effect of Object methods that invoke preventExtensions (something that would not be an issue from inside the engine).

@trusktr

trusktr commented Aug 10, 2020

Just FYI, TypeScript nowadays throws errors for the cases @rbuckton mentioned, but only when useDefineForClassFields is set to true. playground example. (I happily have useDefineForClassFields always set to false, but it means I must unfortunately rely on a build step.)

@rbuckton and @rdking have a higher number of votes on their replies compared to the replies in opposition (which mostly have no votes). I wish those votes counted.

Even if single-file linters may have trouble with this enforcement, type systems should be able to do a good job.

What about just keeping the web easy-to-use and not requiring any build tools? Let's not forget about people who want to simply edit an HTML file and get to making something awesome.

I would vote to revoke the existing class-fields behavior and switch to [[Set]], even if that breaks my applications (and those of others). I believe the long-term benefit would far outweigh the temporary breakage.

Browsers can log prominent yellow or red messages to the console for a number of months, then finally make the flip.

@ljharb Speaking of what people do in practice (like you mentioned with React), what about Angular users? Have you forgotten about them? They've been using decorators, and now those don't work with class-field [[Define]] semantics. That seems to go against your philosophy of siding with existing real-world "best practices". (Unless you particularly have unfair bias towards React winning and therefore purposefully don't tell the other side of the story? I'd like to believe that isn't the case.)

@ljharb
Member

ljharb commented Aug 10, 2020

@trusktr i'm not familiar enough with angular to answer; but i'm very skeptical that Set vs Define actually matters to the vast majority of use cases. If you have evidence for that (not just, "one library breaks", but "lots of code patterns that work with Set don't work with Define"), then i'd file a new issue providing that information, as I'm sure champions and delegates would be very interested to consider it.
