Don't do it. #31
There are a lot of interesting points here, but I just want to respond to one particular point for now:
I think it's good that we're discussing various semantic alternatives. We do this for all TC39 proposals. There are always a lot of different choices that we could make. However, I don't see this as a cause for existential concern about whether we should make a standard in this space.

I'm optimistic that we can find a useful answer here, because the experience in other languages and libraries shows that there's a lot of diversity of answers that work out well. Several other languages support a decimal or rational type, and in each design process there were various debates about semantics. However, I don't see much evidence of programmers being tripped up by the details that were selected. (I would like to spend more time looking for these sorts of issues, though.) This is despite the fact that there's huge variability in the semantics between languages.

You might expect that excluding something from the standard and letting people experiment in user-space libraries would yield a lot of different experiments, and practical experience from people choosing various camps. I haven't seen much evidence of this. In languages like C++, Rust, JavaScript and OCaml, which lack built-in decimal types, I can find decimal/rational libraries for each, but it seems like there's one main popular one (or, in JS's case, a family of similar ones by the same author). And I don't see any particular trend of certain semantic details becoming more popular than others; the popularity seems to have to do with other factors.

Overall, I think developers' hard requirements are much looser than the semantic details that we're talking about nailing down here. I want to have an open discussion about the details, and take this all seriously to get a very good answer, but I also think there's a broad range of answers that will meet developers' needs.
It has been pointed out to me that I should clarify: I am totally in favor of standardizing a library for BigDecimal. My concerns as laid out above apply specifically to adding BigDecimal as a language feature.
@jakobkummerow To be concrete, does this mean that you'd be in favor of adding something like the BigDecimal constructor as a property of the global object, but something which produces objects manipulated with methods, rather than primitives and operator overloading and literals?
My key point is that I'd be in favor of something that doesn't have to be built into every engine/browser because it is built from existing primitives and can be dynamically loaded when needed. As a rough sketch, think "like a module on NPM" (i.e. conveniently loadable on demand by those who need it) + "with official TC39 blessing" (i.e. stable, reliable, canonical, best-practice), details TBD. A necessary consequence of that approach is that it allows only things that can be "polyfilled", in particular no overloaded operators, and no new primitive types. The key benefit, and driving motivation, would be that the feature would be available everywhere, immediately upon release. (For more detailed reasoning and additional benefits, see my post above.) If BigDecimal serves as a driving force to finally get some momentum behind the "JS standard library" idea, that'd be a great outcome: then other, future features could benefit from that infrastructure too.
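To make "built from existing primitives, no new syntax" concrete, here is a minimal sketch of what such a library-style decimal could look like. The API shape is hypothetical and deliberately simplistic (it ignores negatives, rounding, and everything except addition); the point is just that nothing below requires engine support:

```js
// Hypothetical sketch only: a decimal as digits * 10^-scale, built on BigInt.
class BigDecimal {
  constructor(s) {
    const [whole, frac = ""] = String(s).split(".");
    this.scale = frac.length;            // number of fractional digits
    this.digits = BigInt(whole + frac);  // all significant digits as a BigInt
  }
  add(other) {
    // Align the scales, then add the integer significands exactly.
    const scale = Math.max(this.scale, other.scale);
    const a = this.digits * 10n ** BigInt(scale - this.scale);
    const b = other.digits * 10n ** BigInt(scale - other.scale);
    const result = Object.create(BigDecimal.prototype);
    result.digits = a + b;
    result.scale = scale;
    return result;
  }
  toString() {
    if (this.scale === 0) return this.digits.toString();
    const s = this.digits.toString().padStart(this.scale + 1, "0");
    return s.slice(0, -this.scale) + "." + s.slice(-this.scale);
  }
}

// Methods instead of overloaded operators:
console.log(new BigDecimal("0.1").add(new BigDecimal("0.2")).toString());
// "0.3" exactly; with Number, 0.1 + 0.2 gives 0.30000000000000004
```

Note that this sketch itself leans on BigInt, a natively implemented feature, which is part of the trade-off being debated in this thread.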
@jakobkummerow Just to make sure I understand, is your proposal that TC39 not standardize additions to the JavaScript language and built-in libraries, and move instead towards coordinating the authoring and endorsement of external libraries, which would be bundled/downloaded/pulled in with particular applications? This is a bit different from other "JS standard library" ideas I've heard, where my understanding was that the standard library might be bundled with the JS implementation, even if it's written in JavaScript. Maybe I was misunderstanding; @msaboff @mattijs would know better than me.

Separately, there's been interest from some TC39 delegates in adding operator overloading and user-defined primitive types ("value types") so that libraries can define something like BigDecimal. I'm proposing that we develop these proposals in parallel. Do you think we should be focusing on adding operator overloading and value types strictly first?
No, I said no such thing. I have no objections to (nor even particular opinion on) TC39 being the group that standardizes a JS standard library.
Absolutely not. I am opposed to operator overloading, but that's a separate discussion.
Libraries can do that today, see #32.
Sorry, I didn't mean to misrepresent you, but by "external" I meant not built-in to JS engines. Anyway, thanks for clearing up my misunderstandings.
@jakobkummerow Just so I can understand better, are you suggesting that the JS standard library, standardized by TC39, be built-in to the JS engine, or be distributed separately?
I think it would be particularly great if the design of the standard library were such that both options are viable. I.e., if there is a separately distributed/hosted version that just runs anywhere, and JS engines can, at their leisure, decide whether they want to bundle some or all of it as built-in versions for whatever reason -- in a way that's completely transparent to the running JavaScript code. To illustrate with an example: a C++ developer just writes an `#include`, and doesn't need to care whether the implementation behind it is bundled with the toolchain or supplied separately.

There are even other conceivable approaches: a browser's/engine's various caching systems could give preferential treatment to stdlib modules, such as making them less likely to get evicted.
That’s already the case for globals; that’s how polyfills/shims work. Builtin modules are just an alternative syntactic way to access a standard-conforming global feature by its name.
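For reference, the shim pattern being referred to, as a minimal sketch (the fallback implementation is assumed to come from a userland library, such as the sketch earlier in this thread):

```js
// Classic polyfill/shim pattern for a global: install only when missing,
// so a native implementation wins whenever the engine provides one.
if (typeof globalThis.BigDecimal === "undefined") {
  globalThis.BigDecimal = class BigDecimal {
    /* userland implementation goes here */
  };
}
```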
So you're saying it's easy to spec a standard library that way -- great! Let's have one!
I’m saying we already do have one - globals. Adding builtin modules doesn’t change that landscape, it just encourages a different way to access them. A BigDecimal module and a BigDecimal global place identical requirements on engines as to the feature’s availability and semantics.
We clearly do not have the kind of standard library that I described above as the one I would like to have. Maybe we never will; Dan asked what I'm suggesting, so I clarified. If it turns out that a so-called "standard library" ends up being just a minor layer of syntactic sugar over the existing landscape, then indeed it won't solve any of the issues I'm describing. I was hoping it could be more than that, and open up further possibilities.
Note, there's been some fairly extensive investigation (e.g., from @lukewagner) into whether we could improve caching to make sure that commonly used libraries could load faster. This effort has run into various difficulties. Most prominently, the shift towards double-key caching (by not just the location of the resource but also the origin of the referrer)--an important privacy improvement--makes it hard to get the high hit rates that one might hope for. It's unclear whether operating on a fixed set of blessed libraries might lessen that difficulty.
Fine -- I didn't mean to get hung up on such details; as I said:
The key point that I do think would be very valuable is if the "standard library" can be transparently loaded from somewhere, so that it doesn't have to be baked into every engine.
Any part of the language must be baked into every engine, otherwise you wouldn’t be able to write portable code that uses it. A standard that may or may not be present is one you can’t rely on without providing a fallback (and having a mechanism to do so).
I don't think this "must" tone that @ljharb used is ideal for this kind of conversation. The word "must" makes sense in a specification that demands something, but in this case, who is doing the demanding? I'd prefer that we talk about technical judgements that we, as individuals, make, rather than appeal to an unspecified authority.

However, practically speaking, I am having trouble understanding how @jakobkummerow's idea would work. I could imagine that some non-Web engines might store this code in some kind of "classpath". For the Web, I imagine we'd either have to bless some particular URL per library to fetch from (which I believe is rather unprecedented), or pages would have to specify where to load the library from. This sort of decision would make it far from "transparent", given the requirements on the surrounding HTML to explain where the library should come from, various semantics on the Web for fetch, etc. For this reason, in addition to the broader goal of minimizing how much code is sent to the client on each page load (cc @slightlyoff), I plan to continue to pursue this proposal as something built-in to JavaScript engines, like TC39's past efforts.
Of course, and that would be silly. What I'm talking about is a standard that would be guaranteed to be present, so you can rely on it and do not need to provide a fallback. That aside, for language features following the traditional model you totally have to feature-detect them and provide a fallback, so I don't even see how this is an argument? It's not as if adding a chapter on BigFooBar to the ES2021 spec would magically make it appear in engines. My whole idea here is to get closer to a state where new features actually can "magically" appear in engines, making their presence more reliable, not less.
I suppose technically the way it would work is that the canonical source of the library is an online repository (presumably under TC39's / ECMA's control). JS engines (or, rather, the applications embedding them) that don't expect to have web access would load it from the local file system instead. (Maybe the latter would become the default; it would still have the significant advantage that, for every feature implemented in the library, all it takes to deploy it for a given engine is to ship a new release (not a new version!) of that engine that bundles a newer version of the library; no time-consuming changes to the engine itself would be required. As I said before, the overarching goal is to help the ecosystem move forward in a more frictionless (= speedier, more consistent, more scalable) fashion.)

All the "optional partial bundling" I talked about would be engines saying "ha, I recognize this import, I don't have to actually import it, I already know what to do", similar to how optimizing engines today recognize a bunch of methods on built-in objects and execute efficient custom sequences for them rather than doing actual method calls on the actual built-in objects.

Of course it would be unprecedented. That's literally true for every single proposal. (Having a BigDecimal spec is totally unprecedented!)
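A minimal sketch of how that resolution could look from the JavaScript side (the `js:` specifier scheme and the URL are hypothetical, purely for illustration):

```js
// Hypothetical loading scheme: prefer an engine-bundled copy of a stdlib
// module; fall back to the canonical hosted version otherwise.
async function loadStdlib(name) {
  try {
    return await import(`js:${name}`); // an engine may intercept this
  } catch {
    return import(`https://stdlib.example.org/${name}.mjs`); // canonical copy
  }
}

const { BigDecimal } = await loadStdlib("bigdecimal"); // top-level await, in a module
```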
Understood.
@jakobkummerow Thanks for taking the time to explain. Just to make sure I understand, is the main difference between your proposal and #31 (comment) specifically about where the BigDecimal constructor lives -- in a property of the global object vs a module -- where we have some mechanism for loading the polyfill?
I'd like to separate out the two parts here.
I'm afraid I don't understand this question: doesn't everything live in a property of the global object after it's been loaded? Putting it in a module rather than on the global wouldn't change that.

If you want a very simple summary of what I'm saying, it would be this: make BigDecimal polyfillable. An even more compelling vision would be to take this direction a step further than a polyfillable language feature, and directly "standardize the polyfill", IOW make it an official library. That would unlock additional benefits.
(All of this is a rehash of what I've said above, just an attempt to summarize it differently.)
(Just trying to understand what your proposal is, not trying to debate.)
No, for example, code in modules doesn't work like that, and module exports don't become properties of the global object.
If we're not talking about module vs global object, then is one way to think of your idea as #35 + the "standardize the polyfill" path? I'm not opposed to the "standardize the polyfill" idea as something to look into; I'd just like to separate its investigation from this proposal. (There are other issues with 100% faithful polyfills, like per-realm vs cross-realm brand checks; @domenic investigated transitioning from the latter to the former, but it seemed rather difficult in terms of maintaining conventions.)
That's an implementation detail I don't really care about. Both are polyfillable IIUC.
Yes.
100% faithfulness is trivially achieved if the polyfill is the standard.
Thanks for clarifying!
I’m rather interested in a case where this would be beneficial; could someone clarify?
@jimmiehansson Some use cases are documented at a high level in the README, and there have been some detailed posts with further use cases in #3.
This discussion reminds me somewhat of Scala and its support for XML literals. Scala included XML support in the language and the core library. Over time (and debate) the XML code was moved from the core library to a community-supported lib. Furthermore, it seems the core Scala language support for the feature will be removed and replaced with something else. My point is that similar questions/concerns are being discussed here.
@ShawnTalbert I think we'll just have to make a case-by-case judgement on what things we add to the standard. People seem pretty happy with JSON being part of JS, even though it's another text format like XML. Honestly, I think XML's grammar (which is quite complex, unlike the decimal literal grammar) being part of Scala's grammar, and XML going into relative decline in general (which I don't expect to happen with decimals), are big factors here which don't apply to the decimal case.

During Stage 1, let's assess whether we think decimals are a lasting need in the JS ecosystem, or are more domain-specific/served by libraries. I'd like to collect this evidence starting in #3.

[Totally irrelevant aside: I was really proud of making Factor's XML literals in the library, not core, from the beginning ;)]
That doesn't really seem like a valid comparison, given that JSON is just a safe subset of JavaScript. But that discussion seems like a departure from the fundamental question of whether this should be a language feature or a library.
I don't get the point you're trying to make. You don't think comparing JSON to XML is valid, but have no problem comparing Scala's XML support to this proposal?
I disagree with the idea that existing globals and built-in modules function equivalently in terms of system complexity. As the number of built-in modules increases, so does the complexity of the JavaScript engine, especially in contexts like embedded systems where efficiency is crucial. By treating built-ins as importable libraries, they can either be downloaded or substituted by the engine with more efficient versions as needed. This method also clarifies the distinction between implicit and explicit dependencies. Modules that are not essential do not need to be embedded, making the engine lighter and more adaptable. Explicitly importing modules as needed instead of implicitly loading them as built-ins provides greater control over which functionalities are included, reducing unnecessary bloat and enhancing system performance.
@drpicox on a technical level, globals and builtin modules are identically lazy-loadable in that respect - that's already how browsers work.
TL;DR: I'm very skeptical of this proposal. I'll try to explain why below. In summary, I believe that the use cases are nicely served by user-space libraries, and libraries would avoid several drawbacks of native language features.
(1) Adding features to the language bloats the platform for everyone. Natively implemented features don't come for free. They make binaries bigger, increase memory consumption, and hurt startup speed. They tend to be a little more efficient (in terms of speed and memory) than polyfill equivalents, which means they're worth it for widely used features. The fact alone that there haven't been any native Decimals in JS for >20 years proves that this is not a core use case. When non-core use cases are handled by userspace libraries, the costs are only paid by those who use them. That scales better: the world can have 10,000 features each implemented in a library, most websites (or Node.js/Electron apps, etc.) can include a few of them, and everyone's happy. Shipping native implementations of all those features to everyone would be too bloated to be practical.
(2) There's clearly a lack of clarity about intended behavior, as can be seen from the discussions on this issue tracker as well as earlier conversations. When implemented in one or more libraries, there is ample room to experiment and/or tailor for specific use cases or user preferences. With an attempt to standardize, the pressure is on to find the one way of doing things, which is needlessly restrictive and easy to get wrong. (Examples: different use cases might have different requirements around how `===` treats precision/cohorts; or they might have different requirements regarding de/serialization and interacting with other components like databases or other programming languages. Libraries allow for flexibility; language features by nature have to make restrictions.)

(3) While low-level (in the sense of: close to the hardware) features can have significant performance advantages when implemented natively, this effect does not carry over to high-level features. BigInts are an example: user-space "pair of 32-bit Numbers" emulations of 64-bit integers provide quite decent performance; as of today, none of the shipping native implementations even comes close to matching them. (That might change eventually, but my point is that it's not a given.) See tc39/proposal-bigint#117 for some rather impressive numbers. I don't think it would be reasonable to expect that BigDecimals would be any different.
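For illustration, a minimal sketch of the kind of user-space emulation being referred to (a hypothetical helper, not the benchmarked code from that issue): unsigned 64-bit addition over two 32-bit halves.

```js
// Represent a u64 as [hi, lo], two unsigned 32-bit halves stored in Numbers.
function add64([aHi, aLo], [bHi, bLo]) {
  const lo = (aLo + bLo) >>> 0;            // low word, wrapped to 32 bits
  const carry = lo < (aLo >>> 0) ? 1 : 0;  // did the low word overflow?
  const hi = (aHi + bHi + carry) >>> 0;    // high word absorbs the carry
  return [hi, lo];
}

console.log(add64([0, 0xFFFFFFFF], [0, 1])); // [1, 0]: carry into the high word
```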
(4) Overloading operators makes adoption hard, because it makes polyfilling for older browsers/environments effectively impossible. Again, BigInts are an example: the fact that the recently-accepted BigInt proposal overloads operators makes it very hard for developers to start using them. I recommend not repeating this mistake. Features that don't get adopted are not very useful, no matter how concise their operator syntax is. Libraries don't have this problem: by nature of being written in terms of commonly supported language features, they run everywhere.
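To make the adoption problem concrete: new operator or literal syntax is a parse error on older engines, so code using it directly won't even load there, and feature detection needs indirection. A small sketch:

```js
// `1n` is a SyntaxError on pre-BigInt engines, so the test must be hidden
// inside dynamically evaluated code; no library can retrofit the syntax.
let hasBigInt = false;
try {
  hasBigInt = new Function("return 1n + 2n === 3n")();
} catch (e) {
  // Engines without BigInt land here; there is no fallback for syntax.
}
console.log(hasBigInt);
```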
(5) Overloading operators has a huge implementation cost. Code complexity of binary operations increases quadratically with the number of types that the operands can have. Today's JavaScript engines typically support (small) integers, doubles, strings, oddballs (true/false/undefined/null), objects (with .valueOf/.toString functions), and BigInts. That's 6² code paths. Just because we as implementers had to grudgingly go this far doesn't imply that we would at all enjoy having to go to 7²; "you've done it once, you can do it again" is not at all a convincing argument due to this ever-increasing cost. (Implementations with smaller development teams behind them are going to feel this even more.) Implementing rarely used features as libraries, on the other hand, allows engine implementations to remain lean and efficient.
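A toy model of that combinatorial growth (illustrative only; real engines specialize these paths in generated code, not a switch statement):

```js
// A generic "+" must handle every ordered pair of operand kinds, so each
// newly added primitive kind grows the dispatch table quadratically.
function kind(v) {
  if (typeof v === "bigint") return "bigint";
  if (typeof v === "number") return "number";
  if (typeof v === "string") return "string";
  return "other";
}

function genericAdd(a, b) {
  switch (`${kind(a)}+${kind(b)}`) {
    case "number+number": return a + b;  // double fast path
    case "bigint+bigint": return a + b;  // BigInt path
    case "bigint+number":
    case "number+bigint": throw new TypeError("cannot mix BigInt and Number");
    // ... every remaining pair needs its own case ...
    default: return String(a) + String(b); // grossly simplified fallback
  }
}

console.log(genericAdd(1, 2));     // 3
console.log(genericAdd(20n, 22n)); // 42n
```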
(6) Relatedly, putting features into the language creates a risk of fracturing the ecosystem when some implementations are unwilling, unable, or simply lagging behind in adding support. Once again, BigInts are an example: V8-based browsers have been shipping them for a year and a half (in fresh releases, which doesn't help e.g. LTS versions of Node); SpiderMonkey-based browsers started shipping them this summer (although limited to much smaller maximum sizes, creating compatibility risk); WebKit-based browsers still have no support at all (AFAIK they are hoping to ship some time next year). As an even more extreme example: to this date, some JavaScript implementations intentionally remain at the ES5 feature level, e.g. in order to save memory when running on embedded/IoT devices. In contrast, when new features are implemented as a library, they can be used in any browser/environment immediately.
(7) If it turns out that performance of user-space libraries is okay-but-not-great today, improvement is on the horizon: WebAssembly should gain support for garbage-collected objects sometime soon (next year?), which should make it possible to create and deploy modular, optional "polyfills" with convenient JavaScript usability and near-native performance (which can even leverage existing sophisticated implementations if one chooses to compile from C/C++/Rust).
(8) If standardization is desirable (despite its drawbacks, see (2)), then this is one of several features that would fit a "JS standard library" system consisting of standardized, optional/polyfillable, lazily-loaded modules very well. I think it would make strategic sense to put more energy into the effort to move that forward, rather than continuing to work around the lack of a standard library system in a piecemeal fashion.