Make non-writable prototype properties not prevent assigning to instance #1307
Conversation
This patch looks to me like it would do the trick. At the risk of bikeshedding on details too much, I wonder if it would be a little cleaner to check existingDescriptor's |
At a high level, I like the idea of going in this direction. However, let's make sure to get some data about web compatibility before asking engines to ship the semantics. |
@littledan this patch should be using We cannot use |
Please review and reference the multiple times in the past when this issue has been discussed within TC39 and was tabled. Explain what has changed or is different now that justifies reopening this discussion. This is just good hygiene for a standards committee. A couple of thoughts: It is much more likely that behaviors that terminate in a thrown exception can be changed without breaking existing applications. (No guarantees, but the odds are better.) TC39 traditionally hasn't worried very much about substituting some new behavior for a throw. The behavior you are concerned about currently throws in strict mode code. This suggests that a change that only applied in strict mode might be plausible. Because of the spec's intentional factoring of behavior between a MOP and syntax-linked evaluation actions, [[Set]] does not know whether or not it is being invoked from a strict mode operation. So it would not be possible to make a strict-mode-only change via the specification of ordinary [[Set]]. Instead, what you would probably have to do is make the change in step 6.3 of PutValue. If [[Set]] returns false for a strict mode PutValue where the thisValue is an extensible object that does not have the corresponding own property, then you could create and initialize an own data property using [[DefineOwnProperty]]. |
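The PutValue-level change described above could be approximated in userland, roughly as follows. This is a hedged sketch: the helper name `putValue` is hypothetical, not proposal text, and it deliberately ignores the accessor-property subtlety raised later in the thread.

```javascript
// Userland approximation of the suggested strict-mode PutValue fallback:
// if ordinary assignment is rejected, and the receiver is an extensible
// object without a corresponding own property, define an own data
// property instead. Hypothetical helper; not actual proposal text.
function putValue(obj, key, value) {
  if (Reflect.set(obj, key, value)) return true;
  if (Object.isExtensible(obj) &&
      Object.getOwnPropertyDescriptor(obj, key) === undefined) {
    return Reflect.defineProperty(obj, key, {
      value, writable: true, enumerable: true, configurable: true,
    });
  }
  // Otherwise keep the strict-mode behavior: the assignment fails.
  throw new TypeError(`Cannot assign to property "${String(key)}"`);
}

// An instance can now "override" a non-writable prototype property:
const frozenProto = Object.freeze({ x: 1 });
const instance = Object.create(frozenProto);
putValue(instance, "x", 2);
console.log(instance.x);    // 2 (own data property created)
console.log(frozenProto.x); // 1 (prototype untouched)
```

Assigning to a non-writable own property (rather than an inherited one) still throws, which preserves the existing strict-mode guarantee for frozen objects themselves.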
@allenwb if you have specific past conversations about this that you want to discuss, we can do that. Some quick searching on https://esdiscuss.org/ didn't show anything of note. #1154 exists on ecma262, but it is the only past reference to this issue that I found in some quick searching. It also doesn't have a concrete solution proposed. The stalled nature of that issue, and the fact that I am proposing a concrete change, make me think it appropriate to open a new PR. In addition, after some review, we should not do this in PutValue. In particular because: X = {get f() {}}
X.f = 1; performs a PutValue and returns false. Per moving from error to other behavior: I agree it is observable, but many changes, including adding methods to objects, can also move from errors to non-errors. This is unlikely to be controversial given history. Per moving from noop to other behavior: I doubt that any code intentionally relies on a [[Set]] failing to do anything and then tests to ensure that no change occurred. Gathering data through means such as usage counters would give us some actual data rather than hand waving. In addition, code in ES Modules and class bodies always throws, so this is a relevant breakage, but it is generally going to be hard to rely on the behavior remaining a noop as code migrates slowly to newer features. Per why this is more relevant than it was in the past: we have had presentations about freezing primordials and the security problems of their mutability, both from Mark and from Natalie, within this past year. I would also state that the landing of class fields is a prime example of a define operation being introduced at least in part due to this behavioral discontinuity, and due to it being untenable to perform [[Set]] on prototypes if they could be frozen as the spec stands today. Since the issue I mentioned on the spec was also from this year, it does not seem like actual proposals to fix this have come up much. |
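For readers following along, the behavioral discontinuity under discussion can be reproduced in a few lines (a minimal illustration, not part of the patch):

```javascript
// A non-writable prototype property blocks creation of an own property
// on an instance via plain assignment ([[Set]]).
const proto = {};
Object.defineProperty(proto, "x", { value: 1, writable: false });

const obj = Object.create(proto);

try {
  obj.x = 2; // sloppy mode: silent no-op; strict mode: TypeError
} catch (e) {
  console.log(e instanceof TypeError); // only reached under strict mode
}
console.log(obj.x); // still 1: no own "x" was created either way

// The same assignment inside strict code reliably throws:
(function () {
  "use strict";
  let threw = false;
  try { obj.x = 2; } catch (e) { threw = e instanceof TypeError; }
  console.log(threw); // true
})();

// [[DefineOwnProperty]] is unaffected, which is the discontinuity:
Object.defineProperty(obj, "x", {
  value: 2, writable: true, configurable: true,
});
console.log(obj.x); // 2
```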
It would also be helpful to understand the end user value that this change will create (taking into account the fact that users have had time to "deal with" the current semantics). |
@zenparsing I have seen two ways users "deal with" the current behavior:
The possibility of the accessor is why this had not been taken seriously by TC39 in the past. Caja in fact has been using this accessor trick in production at scale for many years (https://github.com/google/caja/blob/master/src/com/google/caja/ses/repairES5.js#L388), successfully. However, since then, Agoric and Salesforce have reengineered the core mechanisms of Caja/SES into the Realms Shim https://github.com/tc39/proposal-realms/tree/master/shim and modern SES https://github.com/Agoric/SES to be much more efficient than the old Caja/SES. Salesforce has deployed this in production at massive scale, and finds that the overhead of [[Get]]ing through the getter causes a painful, significant slowdown compared to normal JS. |
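For context, the accessor trick mentioned here looks roughly like the following. This is a sketch of the pattern in repairES5.js, not Caja's actual API; the name `enableOverride` is illustrative.

```javascript
// Sketch of the accessor workaround: before freezing, replace each data
// property on the prototype with an accessor whose setter emulates
// "assignment creates an own property".
function enableOverride(proto, name) {
  const desc = Object.getOwnPropertyDescriptor(proto, name);
  if (!desc || !("value" in desc)) return; // only data properties
  const value = desc.value;
  Object.defineProperty(proto, name, {
    get() { return value; },
    set(newValue) {
      if (this === proto) {
        throw new TypeError("Cannot assign to read-only property " + name);
      }
      // On an instance, do what a fixed [[Set]] would do: create an
      // own data property that shadows the prototype's.
      Object.defineProperty(this, name, {
        value: newValue,
        writable: true,
        enumerable: true,
        configurable: true,
      });
    },
    enumerable: desc.enumerable,
    configurable: false,
  });
}

// Usage: repair, then freeze. Instances can now override by assignment,
// but every read of the property goes through the getter.
function Point() {}
Point.prototype.toString = function () { return "point"; };
enableOverride(Point.prototype, "toString");
Object.freeze(Point.prototype);

const p = new Point();
p.toString = function () { return "overridden"; };
console.log(p.toString());           // "overridden"
console.log(new Point().toString()); // "point"
```

The per-read getter indirection is exactly the [[Get]] overhead the comment above describes.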
So to be explicit, the values to the end user:
Excellent, thank you @erights . |
You're making the proposal, so it's basically your job to adequately research it. You should take my word for it that it has been discussed multiple times going back to ES5 development. I'm happy to give you some pointers on where you need to look, but I've done this sort of tedious search too many times to volunteer to do it again for somebody else's proposal. (BTW, this is a great example of why the TC39 archives need to be better curated and why a technical topic index (at least of meeting notes and decisions) would be particularly helpful.) I would suggest reviewing meeting notes as well as a deeper review of ES-discuss/es5-discuss. Note that some technical meeting notes prior to May 2012 are only attached to formal minutes on the Ecma members file server, and others only occur as messages on es-discuss or archives of the old Ecma TC39 email reflector. There are also presentations and proposal docs (not necessarily on this topic??) that only exist on the Ecma members site. Also see the bugs.ecmascript.org archive and wiki.ecmascript.org as archived at archive.org. My recollection is that I was recently searching for something in the 2008-2012 timeframe and stumbled across something about "the over-ride bug" and thought "I bet that this is the first discussion of this issue". Unfortunately, I don't think I captured a link, as it wasn't the topic I was hunting at the time. Wherever you look, Mark Miller is going to be part of the discussion, as he is the person who has consistently been involved in raising the issue.
Exactly: because, in past discussions, it has never made it past the "this change is too likely to break the web" stage. You need a really strong case to get past that hurdle, i.e., Browser Game Theory
My suggestion is just a sketch of a possible path that is less likely to run into objections. WRT accessor properties, I would suggest expanding the predicate it suggests for PutValue 3.2 so it doesn't apply to accessor property over-rides. |
See http://web.archive.org/web/20141230041441/http://wiki.ecmascript.org/doku.php?id=strawman:fixing_override_mistake and the links at the bottom to other discussions. Repeating here in full @allenwb 's rationale for not fixing the override mistake, so that we can respond to it here:
Another approach is to pressure engine implementors to put more effort into optimizing such accessors. I don't see any reason why, in very hot loops over objects with stable shapes, such accesses couldn't be specialized and inlined. That's sort of the whole history of JS engines: features remain slow until they get enough usage that implementors are pressured into optimizations. |
@allenwb even if we make accessors fast, it does not fix the underlying problem for data properties, which are what the JS spec uses for primordials. Nor does it fix the problem for general userland usage of objects that seek solely to prevent mutation of themselves, such as frozen object literals. |
BTW, I should mention that I think the linked inheritance behavior of get/set accessor parts was also a mistake. It means that somebody who just wants to over-ride one half of an accessor (let's say, disable the set behavior while preserving the get behavior) has to remember to over-ride the other half with an explicit super call. I don't know if this has any security implications, but I suspect it is a bigger bug farm than the read-only over-ride issue. |
hence the suggestion to make the change for strict mode data properties. |
Following up to @erights excellent response to @zenparsing 's request for end user benefits, this change directly benefits a use-case we have in production today: multiple applications that share the same realm for resource efficiency reasons. In the use-case, it is still desirable to reduce interaction between the separate applications. You can have a consensual agreement that both apps will not mutate globals, but for real robustness you need to deep freeze the global object. We do that today in development (using the accessor replacement approach), but the performance degradation is too much to accept in production. This change would promote the ability to run guaranteed-global-mutation-free code in production. |
I second Allen's points here. We've had a number of discussions about this that need to be put into context here. |
We certainly have had a number of discussions, because I keep raising it, Allen keeps objecting. I think Allen's points are wrong and he no doubt thinks mine are. Because the accessor trick was working well enough for Caja/SES and I had many higher priority fights, I never pursued it to exhaustion. We certainly never achieved consensus not to fix the override mistake. I do not recall anyone making arguments not to fix it besides Allen, except that Brendan would occasionally also voice concern about compat. All the points I remember anyone making, that we should not fix the override mistake, are well represented by Allen's old text that I quote above. Every time I brought it up again, and Allen argued against it again, I remember having a distinct impression that neither of us were saying anything new. So, yes, everyone please research this. Please follow the links from my old message: http://web.archive.org/web/20130731082322/http://hg.ecmascript.org/tests/test262/file/c84161250e66/test/suite/chapter15/15.2/15.2.3/15.2.3.6/15.2.3.6-4-405.js I acknowledge that the last ends with my footnote:
So, yes, we can learn something new by diving into this history. Apparently, in 2011 I agreed with Allen. I do not remember this and am surprised. Perhaps there are other surprises. But we should not imagine that we will ever do exhaustive research on the history of any of our 9-year-long controversies. Given my memory, Allen's old points, and Allen essentially reiterating a subset of those points here without access to his old writings, further research is a nice-to-have, not a requirement for proceeding. What is a requirement: well-articulated and compelling answers to each of the points Allen raises in that old page, and to anything he cares to add here. |
My primary concern is not whether the current behavior is good or bad from a language design perspective or whether the current design choice was intentional or a mistake. I really don't care about those aspects of this issue. I do, however, think the choice of words we use to describe the current behavior ("mistake", "bug", "design choice", etc) can shade and distract (either intentionally or unintentionally) from the discussion of the aspect of this issue that I think is very important. That aspect is: what do we mean when we say "Don't break the web", and how does that shape how we approach issues like this one? To me, "Don't break the web" doesn't just mean we don't want to suddenly break the Google or Amazon home page. Or any of the current top X thousand web sites. To me, it is a commitment that we will treat the web and how it is accessed via browsers (or at least the parts of browsers that TC39 has some influence over) as an archive of human digital activity and knowledge. That means we not only want to keep Facebook working. We also want that silly game page that a teenager created in 2012 to still work when that teenager's grandchild loads that page for the first time 60 years later in 2082. We don't know what hundreds of millions of web content creators have done between 1993 and now. We don't know what obscure JavaScript behavior their work may depend upon. We don't know what is sitting, currently unaccessed, on millions of servers. We don't know what is in various web archives. We don't know what is archived in server backups. We don't know what will be of interest to somebody in 5, 50, or 500 years. A corollary of "Don't break the web" is "We live with our mistakes." Our initial reaction should be to oppose any proposal that may break existing valid (according to de jure or de facto standards) JavaScript code. The longer the proposed modified behavior has been such a standard, the greater the opposition should be.
While a new feature is just being rolled out, or isn't yet ubiquitous in browsers, we may decide to correct a design or specification bug. But once something has become an established interoperable web behavior, the bar to changing it becomes extremely high. Even if the behavior was a mistake. We need to live with and work around past mistakes, not fix them. For this issue, I've already suggested two possible work-arounds. The first one is to change the current exception-throwing behavior in strict mode. Changing an exception to a non-exception is a change that is often non-breaking for existing code, although it still has to be carefully thought through. The other work-around is to lobby engine implementors to improve the performance of the accessor property pattern that can be used for the use case that is motivating the change. Here's another possible fix: add a new property attribute, "overridable" (defaulting to false, like all the attributes), that allows [[Set]] to create an own property to over-ride an inherited non-writable data property. Is this issue important enough to justify that level of change to the specification? I don't know. But if this issue is important enough to justify possibly breaking some unknown part of the web, then it must be pretty important. The real point is: we don't take the easy way out. We need to assume that we must "live with our mistakes" and search very hard for alternatives to breaking changes. Only after doing that, and for the most extreme issues, should we ever consider "breaking the web". |
As a Javascript developer I'm very happy that TC39 is very carefully considering improving things, but at the same time I see a contrast between the current effort and the lack of effort for fixing fresh bugs: #666 (more than 2 years old though, maybe not so fresh...). |
I completely disagree here. We have shipped breaking changes in the past, and the current behavior is unreliable in either case, due to the discontinuity pointed out both in the slides and in the division of strict mode/ES Modules/class bodies vs sloppy mode. We need to gather the data, and yes, this affects millions of sites. However, we have it in our power to gather that data and make an informed decision, rather than attempting to spread fear beforehand.
This mistake, as you are calling it, is one that is problematic for all usage of writable property descriptors. It is not just a mistake; to my knowledge it is solely a source of bugs. The lack of reliability of the mechanism is compounded here. We should endeavor to make as few breaking changes as possible, but this is indeed a mistake that we should view as something to inspect. Workarounds are untenable in this case unless we introduce a new form of assignment, which I consider an equal shock to the ecosystem, as we would then move to tell people to use it or else their behavior for frozen/non-writable descriptors will suffer the problems listed here. If you can point to any notable code that relies on this sloppy-mode noop behavior, that would help me to understand this. Until then, we must wait for usage counters. If no usage is seen at all, and no reports come in during the very long rollout like any other JavaScript feature, are you sure that such sites even want to use updated engines? There must be at least a consideration of the future, which is bigger than the past, when addressing mistakes. Even if we want to make absolutely minimal changes.
As you yourself stated above, increasing the performance of accessors will not help us fix this mistake, as JS uses data properties for both the majority of the spec and many userland operations like function methods and properties of object literals. Such changes are both unable to be web compatible and would not fix our mistake, which has real security implications and usability problems. |
I disagree with this change unless it defaulted to true. Increasing the complexity would not help, for the exact reasons that increasing the performance of accessors would not help. We need to address the real mistake.
I strongly state that it is, and it is becoming increasingly important: follow things trying to use frozen classes, such as PrivateName, and watch them struggle; watch the security talks we are having at TC39; and watch the difficulties in justifying some designs for things like class fields, which would be broken if we used
This PR is the hard way out. We have to be very careful when judging these breaking changes. Increasing performance, adding a new descriptor field, and limiting the change to strict mode: all of these are the easy way out, and none of them fixes the mistake or aids our users who are facing the current behavioral discontinuity and the problems therein. Those solutions are not hard because they do not attempt to address the problem. As you said above, they are "working around" rather than fixing. |
When I define a prototype property as non-writable, I do want assignment on instances to fail. I expect usage where overriding is intentional, e.g. in a custom subclass, to be performed very intentionally, e.g. by using I don’t feel that strongly about this, and I don’t think it’s the end of the world if it were to change. My expectations here are probably informed in part by just being accustomed to it. But it doesn’t seem like a cut-and-dried case of “fixing” to me, and I’m confused about why it would happen via a PR and not a full-fledged proposal. I thought all major changes to semantics went through a proposal process? |
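An intentional override under the current semantics might look like this (an illustrative sketch; the property name `kind` is made up, not from the thread):

```javascript
// Performing an override "very intentionally": a subclass replaces an
// inherited non-writable method via Object.defineProperty.
class Base {}
Object.defineProperty(Base.prototype, "kind", {
  value: "base",
  writable: false,
  enumerable: false,
  configurable: true,
});

class Derived extends Base {}

// Plain assignment (Derived.prototype.kind = "derived") would be
// rejected, because the inherited "kind" is non-writable; a direct
// [[DefineOwnProperty]] is unaffected by the prototype chain.
Object.defineProperty(Derived.prototype, "kind", { value: "derived" });

console.log(new Base().kind);    // "base"
console.log(new Derived().kind); // "derived"
```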
@bathos not everything goes through the stage process. I don't think this PR should go through the staging process, in particular because I don't think the intentions of Stages 0, 1, 2, or 4 really apply to it. It is a massive undertaking to do this, but that doesn't mean it must go through the staging process. If the committee wants it to go through the staging process... we could... but that would be odd to me. |
I agree with @allenwb above that web compatibility is extremely important. TC39 has made several breaking changes in the past, such as subclassable built-ins and Array.prototype.values. It was a lot of work, and resulted in a combination of spec workarounds/fixes and waiting out the death of some old websites, but we did it. Compatibility is an empirical question, not a theoretical one. Let's get UseCounters/telemetry out in the wild to start the investigation. It's interesting that no one in this thread has voiced support for the current object model, in its entirety, being optimal. I agree with @erights that past disagreement shouldn't foreclose current discussion on the issue, and would be really happy to have the help of people who have been around longer than I have to dig up these historical archives, wherever they are. (It's somewhat unactionable to put the responsibility on the proponents when it's just unclear to some of us where to look.) Even if we don't go through the stage process here, I would like to see an optimized JIT implementation and robust web compatibility feedback (e.g. shipping in an early pre-release channel; counters alone are not enough) before merging. |
Note that @bathos did express support for the current semantics. I also think that the current semantics are reasonable - I would say this issue is arguable both ways (although the current design clearly creates a bit of a nuisance for the SES use case). Looking forward to the discussion ; ) |
@bathos @zenparsing "a bit of a nuisance" is a bit of an understatement ;) While I agree that overriders should be using a defineProperty-oriented operation, old code pervasively uses patterns like:

    function Point(x, y) {
      this.x = x;
      this.y = y;
    }
    Point.prototype.toString = function() { return ...; };

The pervasive existence of this pattern in old code has caused most projects that had tried freezing to back off of freezing in production, leaving their users more vulnerable. The others use the accessor pattern selectively, with problems of performance, compatibility, and occasional breakage. No one uses the accessor pattern pervasively, not even the old Caja/SES, because the performance cost is too high, especially during initialization. |
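To make the breakage concrete, here is a small sketch; a stand-in prototype is frozen rather than a real primordial, but in an SES-style lockdown it would be Object.prototype itself.

```javascript
// Stand-in for a frozen primordial prototype.
const primordialProto = { toString() { return "[primordial]"; } };
Object.freeze(primordialProto);

function Point(x, y) { this.x = x; this.y = y; }
Point.prototype = Object.create(primordialProto);

let failed = false;
(function () {
  "use strict"; // strict mode: the failed [[Set]] becomes a TypeError
  try {
    // toString is inherited and non-writable after the freeze, so
    // [[Set]] rejects this legacy-style assignment instead of creating
    // an own property on Point.prototype.
    Point.prototype.toString = function () {
      return "<" + this.x + "," + this.y + ">";
    };
  } catch (e) {
    failed = e instanceof TypeError;
  }
})();
console.log(failed); // true
console.log(String(new Point(1, 2))); // "[primordial]": the override never landed
```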
@erights Thanks for the clear illustration / @bmeck thanks for explaining. Yes, addressing those needs does seem to have higher value than anything positive about the current semantics. I commented mostly because I was surprised by the mechanism of the possible change, but it sounds like my intuition for "what is / isn’t a proposal" probably is just not calibrated correctly. |
In parallel with the great conversation above about whether we want to go for this semantic change, let's figure out how this PR should be worded. Going back to details of this patch (to continue on this thread), it looks like I got a bit confused on a first look. In OrdinarySetWithOwnDescriptor, there are a bunch of variables flying around:
There are two lines that do writability checks:
I'm wondering if what we really want to do (if we want to make this semantic change) is simply delete 3.a. (Did I get that all right?) |
If you want to create the property on the receiver shouldn't you explicitly be using defineProperty? |
@devsnek that's up for debate; as #1307 (comment) pointed out, there are prevalent ways to do this without defineProperty. If we were to introduce some specialized define operator, those cases would not be fixed. Since a large part of the presentations we have seen in the last year deal with freezing values, and not with updating all assignment operations, we should look at what can be done here, since introducing a define operation would not fix anything. |
@littledan see final paragraphs of I still contend that if this breaking change was made that it should only apply to strict mode (is anyone trying to build OCAP sandboxes that allow sandboxing non-strict code?) hence OrdinarySetWithOwnDescriptor is not the right place to do it. Also, if you remove step 3.a you will be adding an additional observable (via a proxy) and redundant [[GetOwnProperty]] call on O for the base (assigning to a readonly own property) case of: let O = Object.defineProperty({p: 1}, "p", {writable:false});
O.p = -1; The purpose of 3.a is to ensure that this extra [[GetOwnProperty]] call does not occur. |
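The observability point can be checked directly (a rough sketch; the trap count reflects the current spec):

```javascript
// Every [[GetOwnProperty]] performed on a proxy during [[Set]] shows up
// as a getOwnPropertyDescriptor trap call, so step 3.a's short-circuit
// is observable.
const calls = [];
const target = Object.defineProperty({}, "p", { value: 1, writable: false });
const proxy = new Proxy(target, {
  getOwnPropertyDescriptor(t, key) {
    calls.push(key);
    return Reflect.getOwnPropertyDescriptor(t, key);
  },
});

try {
  proxy.p = -1; // [[Set]] returns false (surfaces as TypeError only in strict mode)
} catch (e) {
  // strict mode reports the rejected assignment as a TypeError
}
console.log(calls.length); // 0: step 3.a returns false before any
                           // Receiver.[[GetOwnProperty]]; removing the
                           // step would make this 1
```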
The fundamental question isn't about the relative merit of the two semantics. Rather it is about whether TC39 is serious about its commitment to "don't break the web". |
This PR exists explicitly because the current semantics make the goal of freezing while preserving behavior untenable in the real world. This PR is exactly about the semantics, and whether the current behavior is enough of a problem that we must change it. We have had multiple agenda items in the last year that touched on freezing in a variety of ways, not all of them related to a full Object Capabilities model.
Yes, we should be serious about it. However, using phrases like that without data, with the intent of spreading the idea that no breakage is ever allowed in the language, is simply not useful. We have some data about the impact of keeping the current semantics. We have also made breaking changes in the past, and even recently renamed If we are to talk about "don't break the web" we must first decide what breaking the web entails. If there are viable alternatives to this PR, that sounds great. If there is a way to make |
In the September 2018 TC39 meeting, we discussed this PR. A high-level summary of my understanding of the discussion (please correct me if this is a misrepresentation):
…ng to instance TC39 is discussing making inherited non-writable properties overridable with Set, in tc39#1307. We have not yet assessed the web compatibility of the change or written tests, so this PR is not yet ready to merge, but this PR gives some concreteness to how the specification may be done, to help move the discussion forward. This patch implements the approach described in tc39#1307 (comment)
Some additional background that I found. As the thread mentioned, this behavior goes back to ES1. In fact, it was in the very first ES1 draft of January 10, 1997. The image below is a screen snap of the relevant algorithms in that draft. The thread mentioned that these algorithms were initially authored by people from Microsoft. It's probably worth adding that this included people with considerable background in Self and NewtonScript, so it seems likely that they were fully informed about the "prototypal inheritance" motivations for this approach. It was never a "mistake"; it was an intentional, informed design decision. |
Intention and information are not mutually exclusive with mistakes. Just saying. :) |
Once it is fixed, we can stop worrying about what to call it ;). |
Superseded by #1320
The so called "override mistake" is a symptom of this.
Some slides: https://docs.google.com/presentation/d/17dji3YM5_LeMvdAD3Y3PQoXU1Mgm5e2yN_BraBSTkjo/edit