8th April 2024 101st TC39 Meeting


Delegates: re-use your existing abbreviations! If you’re a new delegate and don’t already have an abbreviation, choose any three-letter combination that is not already in use, and send a PR to add it upstream.

You can find Abbreviations in delegates.txt

Attendees:

| Name | Abbreviation | Organization |
| ---- | ------------ | ------------ |
| Waldemar Horwat | WH | Invited Expert |
| Linus Groh | LGH | Bloomberg |
| Duncan MacGregor | DMM | ServiceNow |
| Daniel Minor | DLM | Mozilla |
| Nicolò Ribaudo | NRO | Igalia |
| Chris de Almeida | CDA | IBM |
| Jesse Alama | JMN | Igalia |
| Kevin Gibbons | KG | F5 |
| Michael Ficarra | MF | F5 |
| Jordan Harband | JHD | HeroDevs |
| Ben Allen | BAN | Igalia |
| Jason Williams | JWS | Bloomberg |
| Bradford Smith | BSH | Google |
| Ujjwal Sharma | USA | Igalia |
| Philip Chimento | PFC | Igalia |
| Sergey Rubanov | SRV | Invited Expert |
| Mark Miller | MM | Agoric |
| Daniel Rosenwasser | DRR | Microsoft |
| Jack Works | JWK | Sujitech |
| Istvan Sebestyen | IS | Ecma International |
| Ashley Claymore | ACE | Bloomberg |
| Mathieu Hofman | MAH | Agoric |
| Samina Husain | SHN | Ecma International |
| Mikhail Barash | MBH | Univ. of Bergen |

USA: And, okay, so let’s move on with the approval of last meeting’s minutes. The minutes from the last meeting have been uploaded on GitHub. Let’s give a minute for the approval. As a reminder, if you do not approve of the last meeting’s minutes, you should speak up now. Great.

USA: All right, now that the last meeting’s minutes have been approved, I would like to ask you all to confirm the adoption of the current agenda for this meeting. You might have checked it out on the agenda’s repo that we have. That said, if you have any objections regarding the agenda of this meeting, please speak up.

USA: All right. With that, we have adopted the current agenda and we’ll move on with the meeting. Samina, are you prepared for the report?

Secretary's Report

Presenter: Samina Husain (SHN)

  • slides (see agenda)

SHN: Thank you. Great, thank you for the great start, Ujjwal, and details for the meeting and the overview. Yes, it is a solar eclipse day today, so I think the next one in this hemisphere is not for many, many years. From a timing perspective, I think it’s in the latter half of our meeting, depending on where you are. I am in the East Coast timezone, so it will be in the very end of our meeting and we will have totality for just over two minutes, so it should be an experience. And hopefully we’ll all have no clouds.

SHN: All right, so just an update from what we have discussed from our previous meeting, which was the 100th meeting in San Diego. I just -- just an overview of what we will talk about in my short presentation. So just observation for everybody, the annex slides I will not go through and I will leave it all to you to review on an as-needed basis. It highlights what are the latest documents uploaded on the ECMA server, which you may find interesting. Not every delegate has access to it, but may get access or information from you chairs. There’s always information about the statistics and the participation we’ve been having in the meetings and when the next dates are. But I will leave that for you to review after.

SHN: All right, I sent out an email a little while ago to all members of Ecma, which many of you may have seen. We have a vacant position in our Executive Committee, so we are looking for nominations. It is typical that we do this only once a year, but it is also very important to be active and have engagement from members, so we have made the decision through the Executive Committee to hold a nomination outside the usual year-end cycle. So if there is somebody from the ordinary members you would like to nominate, or if somebody would like to self-nominate, I would be happy to receive it; just email it to me. It’s a good opportunity to be very active in the governance and the activities of Ecma through the different technical committees that are involved. That’s just a reminder.

SHN: I also have a reminder on the Ecma approval process. Thank you very much: the Edition 15 preparation has already begun, and that document was frozen very soon after the San Diego meeting. I’m just giving a reminder with the dates here. I don’t think I missed it, but if I have, you may correct me: for ECMA-402 I have not seen the freeze of that document. If it has been done, thank you. If it has not been done, then please remember that we are very close to the timeline if you want to manage the 60-day opt-out and the 60-day review before the General Assembly, which takes place in June.

SHN: Some new projects and new members I want to highlight. I had already mentioned some of this in our previous meeting, but we have TC54, which has been active since December of last year. It is moving forward very well. We have regular meetings; if you are interested, the recordings of the meetings (they’ve always been recorded) are published on YouTube, and I believe they are publicly available. The information can be found through the CycloneDX website, or you may just ask me. The meetings are now held weekly, and they’re one-hour meetings; the committee is working through the entire documentation and specification before it goes for approval at the Ecma GA. It’s a very extensive project and very well done. If you want to participate as a member, please do so.

SHN: We also have a new proposal that we will be discussing at the April Ecma ExeCom: TC55, pending the name, based on WinterCG. Again, if there’s interest from members of this committee in that, it’s excellent that we have a new technical topic to work on. We have already chartered TC39 TG5 on experiments in programming language standardization; I hope that’s moving forward. I believe our colleagues from Bergen are on the call and will give an update. This is very good, and I look forward to it continuing to move forward.

SHN: Very happy to welcome some new members. I had noted them as potentials at my last meeting, and we have since received all the applications, so they are all provisional members today: Replay.io, HeroDevs and Sentry. There is one small but important document, the RF form, that I am waiting to have signed and returned to my attention, and then there should of course be complete participation from Sentry as well. Since we have new members and a big committee, I wanted to bring to your attention that when an application comes to the Ecma International secretariat and we start the process, the member company, after a very minor exchange of documents, becomes a provisional member. They may participate with their delegates in any TC they wish. It is important to note that they may not vote until they become an official member, which typically takes place after the General Assembly. I just want to highlight that right now: the new members that have joined are provisional members, and they do not have voting rights. In addition, I want to mention that these organizations had invited experts participating in the TC, checking it out, which is how they eventually came to become members. Your invited expert designation will change to a delegate designation once your organization becomes an official member, and that can take place in the June time frame. Going from invited expert to delegate does not in any way change the level of expertise you bring to the team; only the title of the designation is slightly different. So that’s just a clarification.

SHN: And I’m just going to pause here. I can’t see if there are any raised hands or questions, but please ask in the queue and I will address them when I finish speaking.

SHN: I also wanted to follow on from that. We have many new members, we are a big group, and we also have valuable invited experts, so I wanted to highlight some text here. The first two bullets are texts that come directly from the invited expert application, from the Ecma perspective. We are very happy to have invited experts; you bring great value. Typically, invited experts are not necessarily representing a member or any organization, but in the event that they do represent a member, or we encourage them to become a member, then of course the designation changes, as I’ve mentioned. I also want to highlight again that invited experts do not have voting privileges. You have the privilege to be part of the committee, bringing your valuable input, and to have lots of discussion. When it comes time to do any kind of temperature check, as we’ve done in the past, then typically we don’t vote. But if such an extreme case were to happen, invited experts, like provisional members, are not permitted to vote. I also want to remind you, from a voting perspective, or even a temperature-check perspective as we had in the November meeting: there are over 20 member companies on TC39, I think exactly 27, and each member has one vote and a voice in the discussion in a temperature check. It is important that the invited experts and the observers do not necessarily participate in that. It’s a difficult thing to manage, but I look to the committee to take care of it appropriately.

SHN: Again, as we have new members and new invited experts always coming on: thank you, USA, you already very clearly mentioned, as the chairs do at the beginning of every meeting, that there is a Code of Conduct committee. I just wanted to take the time: I’ve been through your website, your GitHub page, and extracted some keywords. I want to remind the committee that as we work together, it’s very important that we do so in a very open and constructive manner on any of the channels we are engaging on for this meeting. Just as a reminder, I’ve put up the keywords, and we do tend to use these. I know we have lots of hot conversations and deep discussions; please always remember this, and continue to be as productive as we are.

SHN: My last point on my main slides is some feedback on the review of the solution for a PDF version. I do see that there is a slide set that has been prepared, and a lot of effort has been made by Kevin and Michael; thank you very much for that. I will wait for your discussions, which I think come up very soon in the agenda. It was just a placeholder on my slides to remember that we do discuss this. I look forward to that feedback, and of course to finding the next steps. And that’s the end of my slides. As I mentioned, I will leave you to review everything in the annex: the documents that are uploaded, the statistics, and of course the next-meeting information for the schedules we have, not only for TC39 but also the GA and ExeCom. With that, I will stop sharing and be happy to address any questions. As far as I can see, there is nothing on the queue. Okay, there is now a question. Nicolò?

NRO: Yeah, you said that temperature checks have basically the same rules as votes, unless I misunderstood. I find this weird, because except for a couple of times, we have used temperature checks as a non-binding way for champions to see what the general opinion is. So I think it’s good to let invited experts participate in that.

SHN: Thank you. Typically, I’ve only been involved in one temperature check in the many meetings I’ve attended, and I can only use that as a reference. I do remember it being quite extensive. So thank you for that. And, yes, I understand that it’s very important for the committee because you have different contributors, so I think it would make sense to enable you to have that conversation in that sense. I don’t think you’ve ever had to vote, because we don’t vote, even in an extreme sense. Could somebody just confirm that for me?

DE: I agree with NRO that temperature checks do not constitute votes. They were very, very explicitly formed for this lightweight purpose. However, even though we almost never have an official Ecma-style vote with one member having one vote (possibly only when referring a specification to Ecma), we do frequently request consensus on stage advancement in TC39. It’s long been kind of ambiguous, with different delegates having different opinions on what our process currently is, whether invited experts can veto. And in my opinion, although temperature checks are not a vote, asking for consensus is sort of a kind of vote. We do definitely operate in a way that different delegates from the same member organization can individually express their opinions, and I think that’s important so that we can maintain the intellectual integrity of how the committee works. But at the same time, blocking a proposal is a serious action, and we can legitimately decide whether that is restricted to Ecma members or includes invited experts.

SHN: Yes, may I just make a comment. Thank you both for that clarification. If you would allow me to think through this; I will not change anything in what you are doing now with the temperature check. I appreciate this feedback, so that I can make a better statement after learning all of this. Go ahead, next.

JHD: Yeah, just echoing all that. Temperature checks are not votes, and we’re sort of allergic to votes; we only agreed to do temperature checks in the first place when we all agreed they weren’t votes. So I think that’s just miscommunication. As far as invited experts, I mean, the point of consensus is getting everyone’s opinion taken into account, and member status has always been irrelevant for that. If you’re participating in the meeting, you participate in consensus, empirically. If we need to change that, I think we should have a separate agenda item about it and not derail this topic now (and that would probably need consensus to change).

USA: Let’s not go too deep into this discussion, given this is purely a process-related thing on the Secretary’s Report. But to move on with the queue: SYG, you’re next.

SYG: If invited experts can participate in the consensus process without restriction, I’m confused why anyone would ever become a member. Like, you would need one member, invite everybody, and then we would all just participate.

DE: So I think there’s some confusion about how policy is applied and adopted here. The Ecma-level policies are set based on Ecma rules and bylaws, which are adopted at the Ecma General Assembly, and all of you who are members have the right to participate in the Ecma GA. We also have the Ecma Executive Committee meetings, which welcome many people around Ecma. Although official Ecma votes are by ordinary members, in practice these things work similarly to TC39’s consensus building, among all attendees, not only the ordinary members. If you’re interested in Ecma policy, I think there’s a lot of work to do, and it would be great to collaborate here. Ultimately, we selected Samina as the Secretary-General, or sort of executive director, of Ecma in order to be our trusted administrator to apply these policies. So I think that’s the appropriate level to work through some of these things. I’ll be really interested in getting more involvement from the committee in all aspects of this. Thanks.

USA: Thank you, DE.

SHN: I really appreciate the feedback and input that you’re providing here. It gives more clarity, and Dan’s absolutely right, we will be having these discussions at the ExeCom and the GA. So no changes are being made, and it’s very important that we have a common understanding. All the input is very valid, and we will be very pragmatic. I’m not sure if I can see the queue right now, so, Ujjwal, let me know if there are any questions.

USA: Thanks a lot, Samina. Also thank you for supporting us here. Next we have a reminder by JHD.

JHD: Yeah. So the reminder is just that in TC39, we use GitHub Teams to keep track of, you know, permissions and stuff, but also who is a delegate and for which member, and who is an invited expert, and so on. So if you are the point of contact for your member company, please review the delegates GitHub team for your member company, and if there is anyone who is missing or who should no longer be on the list, please file the appropriate admin-and-business issue for each of those changes so that we can keep things up to date. Thanks.

USA: Thank you, JHD, also for being our super proactive administrator. Moving on.

Speaker's Summary of Key Points

The report covered several topics, including updates on previous meetings, new projects and members, and reminders about upcoming deadlines and procedures.

An overview of the slides was provided, noting that documents in the annex slides would be available for review but would not be discussed during the meeting.

The vacant position on the Executive Committee was noted, and nominations from members were encouraged. The importance of active engagement and participation in Ecma governance was emphasized.

The attendees were reminded about ECMA approval processes and upcoming deadlines for document freezes. The importance of timely action to ensure smooth progress was highlighted.

Updates on new projects and members were provided, including information about TC54 and a proposal for TC55. Participation from the committee members was encouraged in these initiatives.

Additionally, new members were welcomed and the process for becoming provisional members was explained. The distinction between invited experts and delegates was clarified, noting that only delegates have voting rights.

During the discussion, questions were raised about temperature checks and invited expert participation in consensus-building. The feedback was acknowledged and further clarification on these topics will be provided.

Ecma recognition awards reminder

Presenter: Chris de Almeida (CDA)

  • (no slides)

USA: Next up, we have a reminder about the Ecma recognition awards. As you might know, Ecma is in the business of giving out recognition awards to colleagues who have put in a lot of their time and work in making JavaScript better for everyone, and the chairs are requested every couple of months to submit anybody from the committee they think should be given a recognition award. There are a few positions within the committee, which you must be familiar with, where you’d automatically be considered for a recognition award. But apart from that, we should also be mindful of the amazing work that our colleagues have been doing. So I would like to request you all -- oh, CDA, you have slides for this. Would you like to take this, then, or should I just…?

CDA: I think you did a good job. I just had these slides from a previous meeting; I probably should update this data. The awards are reviewed and potentially approved at the GA meetings. So let us know if you have anyone good in mind. It’s helpful to have the nomination text itself, but don’t let that stop you from providing your idea, and we can help you with that as well. Thank you.

Speaker's Summary of Key Points

  • Reminder to take action on Ecma Recognition Awards

ECMA262 Status Updates

Presenter: Kevin Gibbons (KG)

KG: Okay. Good morning, everyone. Or good whatever it is. This will be your typical brief update from the 262 editors. We’ve landed a handful of normative changes. I believe the first three of these were decided at the previous meeting. The last item came up while we were tweaking some of the semantics related to CSP: we accidentally made a normative change that was not intended and did not have consensus, and that no one caught during the review process, so we made another normative change to put it back. We don’t ask for consensus for bug fixes like that, because they’re just restoring what already had consensus, but we’d like to call them out anyway in case anyone was confused by the previous state or by seeing that change go through.

KG: Not much in the way of editorial changes to call out. The first one here we’re calling out really only as an example of a design principle. There is an abstract operation for finding the index of one string within another, basically `String.prototype.indexOf`. We tweaked it so that instead of returning -1 when the value wasn’t found, it returns a special value, and this is the approach the editors intend to take for operations like this going forward. If you’re writing a proposal and designing such an abstract operation yourself, we recommend this approach to you as well. It generally makes things easier for readers, and since the spec is not code, there’s no overhead associated with returning a special value instead of returning -1. Not returning a number doesn’t actually affect anything; this isn’t C. So just something to keep in mind when designing things like this in the future.
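The convention KG describes can be sketched in JavaScript. This is an illustrative analogue, not the spec's actual abstract-operation text; the names `NOT_FOUND` and `stringIndexOf` are made up for the example.

```javascript
// Instead of overloading -1 as a "not found" signal, the abstract
// operation returns a distinct sentinel value, so a reader cannot
// confuse the "not found" case with a real index.
const NOT_FOUND = Symbol("not-found");

// Rough analogue of a StringIndexOf-style abstract operation.
function stringIndexOf(string, searchValue, fromIndex) {
  const index = string.indexOf(searchValue, fromIndex);
  return index === -1 ? NOT_FOUND : index;
}

// Callers test against the sentinel explicitly rather than against -1.
const result = stringIndexOf("hello", "ll", 0);
if (result !== NOT_FOUND) {
  console.log(result); // 2
}
```

The point is readability for spec readers: since spec prose is not executable, returning a non-number costs nothing, and the intent of the "not found" branch is explicit at every call site.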

KG: Also, quite a few miscellaneous cleanups and consistency fixes that we don’t feel are worth calling to the committee’s attention, but if you’re curious, you’re welcome to view the commit history on GitHub. I’m not going to go through the list of upcoming work, because it hasn’t changed. We are still plugging away at a bunch of stuff. Most of these things we’re not actively working on; some of them we are.

KG: And then the last thing I wanted to call out before we hand it over to Michael to talk about the PDF is just a reminder, as Samina mentioned, that ES2024 is frozen. We are currently in the opt-out period. We do not intend to land any further changes to the specification, aside from editorial changes, or perhaps bug fixes discovered in this brief period that we think are actually worth backporting to the present specification, but we don’t anticipate that happening. And that’s all I’ve got. Thanks for your time. Any questions before I move on?

USA: Thank you, Kevin. There’s nothing on the queue.

Update on automatically producing print-quality PDFs

Presenter: Michael Ficarra (MF)

MF: So this is about the print PDF generation. As some context: for a little while now, we’ve been going back and forth with Ecma trying to find a solution for creating good print-quality PDFs that are up to Ecma’s standards. Most recently, I had agreed to try the approach that AWB had used, or to try to improve that approach, to see if we can achieve fully automated PDF generation that meets our standards with little or no effort. Some more background, if you weren’t aware: AWB has been contracting with Ecma to do the PDF layout on a yearly basis for the last two or three years, for 262 and 402. AWB was using a tool called Paged.js, a polyfill for the CSS Paged Media standard, which allows browsers to support that standard; no browser supports it today. That was combined with a lot of manual tweaking, and we’ll see some examples of what is needed there. This is a lot of work: we would go back and forth with AWB during review, and every time we did a review, he would basically have to start at the top of the document and work his way down to redo all the manual page breaking. A very, very manual process. AWB is not going to be doing this anymore, and as I said, Ecma wants us to see if we can do this ourselves. Now, the editor group is not really interested in taking on multiple weeks’ worth of work every year to do it manually, so we’re trying to use the tools as best we can to get automatic generation. If you’re not aware, Ecma has this standard for standards called TOOLS-011, which explains in excruciating detail how the specs should be presented, plus a lot of details about the contents as well. So I tried to follow everything in there as best as possible, and we’ll get to what I could and couldn’t do.

MF: So, definitely a lot of success. I was able to implement all the page header and footer stuff; change all the fonts and dimensions to match what was in TOOLS-011; get all the page numbering restarting and everything, which was very hard but which Paged.js was surprisingly able to handle; get an automatically generated table of contents, which was great; and more. So I was pretty happy with what we were able to achieve. But there were a bunch of rules I have written that unfortunately Paged.js does not support at the moment, and these are pretty important rules to follow; this is where most of the manual effort would have to go. There are certain places where page breaking should not occur, and I’ve written rules for all of those things using CSS, but unfortunately Paged.js just doesn’t respect them. Each of these rules was written because there is currently a violation, and each of these violations would need to be discovered and manually addressed via manual page breaks or by splitting tables or lists, which is not an easy process. It’s a lot of careful work. So if Paged.js were improved, we could theoretically have all of this for free, automatically.
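For context, page-breaking constraints of the kind MF describes are expressed with standard CSS Paged Media and CSS Fragmentation properties. A minimal sketch follows; the selectors (`.note`, `.algorithm-step`) are illustrative assumptions, not the actual class names in the spec's build:

```css
/* Running page footer with a page-number counter (CSS Paged Media). */
@page {
  @bottom-center { content: counter(page); }
}

/* Keep a table header attached to at least the first body row. */
thead { break-after: avoid; }

/* Never split a note, figure, or algorithm step across a page boundary.
   (".note" and ".algorithm-step" are hypothetical names.) */
.note, figure, .algorithm-step { break-inside: avoid; }

/* Keep clause headings together with the content that follows them. */
h1, h2, h3 { break-after: avoid; }
```

The `break-*` properties are exactly the rules MF says Paged.js currently ignores; a conforming CSS Paged Media implementation would honor them automatically.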

MF: And then there are some things that I did not implement. I’m not going to go into details on each one, but some of them were because the editor group just doesn’t feel they would improve the quality of the document; we would rather not follow TOOLS-011 in some spots. Some of it was just because I didn’t want to do the work when we don’t know whether we’re going to go this fully automatic route, given all the other insufficiencies. It’s just a mix there. All of this we could do if we wanted to; some of it we would do only if we decide to go down the route of fully automatic generation.

MF: The biggest warning I have here is that it’s not just that Paged.js creates layout issues, it’s that it may accidentally introduce errors in the document. So the PDF cannot be a canonical resource. It’s very difficult to notice when table rows or even just individual table cells are lost, or an algorithm step is missing between pages, or it’s just pushed off to the side sometimes. There’s some weird things it does. Sometimes the page number references get off, and I’m not sure why. There’s just subtle bugs, and, like, really, really hard to detect bugs. You have to go through the spec line by line, carefully reading it, and to try to find those. So even if we had all of the features, I still don’t 100% trust the tool because it’s pretty buggy. Even though it is very impressive. It’s a very impressive tool.

MF: I have some samples. Here is a section of the automatically generated table of contents. Actually, it’s a little bit outdated; it looks a little different than this now. Just an example. You can see it’s using Roman numerals before the first clauses start.

MF: Here's some examples of some failures, so if you look at the bottom of 230, it splits very weird where the first column and then half of the second column are on 230, and then half of the second column and the third column are on 231. So it makes it look like it’s supposed to be another row, but it’s not. It’s a row split in half.

MF: Here you can see the table header on the bottom of 223 is split from the rest of the table, so you won’t have that on the next page. So that’s the kind of thing that needs to be not split, but get pushed to the next page.

MF: Here you can see the note when it is split across pages, it has weird layout because the actual note part isn’t pushing it. Like, Paged.js isn’t supporting flexbox there.

MF: This is actually a good example where you can see, like, figures are automatically fit nicely onto pages in the bottom right. And here is another example of annexes are supposed to follow strict formatting where they’re supposed to say annex whatever, like, centered at the top, and then whether it’s normative or not, which I think all of our annexes are, maybe. And that kind of stuff, so that’s, like, a thing that we can change.

MF: Conclusions here are, at the moment, it is definitely the case that we cannot automatically generate PDFs using Paged.js that are up to the quality standards of ECMA, especially given that TOOLS-011 has a lot of requirements. If we were to do the process that AWB has done in the past, where we manually address all of the things that Paged.js cannot do automatically today, I would estimate it would take between 50 to 100 hours. A lot of this is, like, it takes like five minutes for Paged.js just to do the render so that we can see if the change we made had the correct effect. And you're going to repeat that, you know, 1,100 times to do every page of the document. So that's a lot of work. And I would do that work once, maybe. But it's work that would have to be redone every year. And I'm really not interested in that. I don't think anyone on the editor group is.

MF: So I give a few options that we have here. There may be others that I’ve not thought of, but these are the things I can think of that we can move forward with. The first option is that we could ask somebody else to do this manual breaking using the HTML document and have Paged.js just render the thing as we want it. The second option is what we recommended to Ecma in the past: there are professional layout services that do this exact thing. It’s called layout. People usually hand something like a book manuscript to these services and they lay it out as a book. It’s not a design service or anything; they just make it break nicely. The third option is that we could work on Paged.js; somebody could improve it. I don’t know how long it would take to fix those bugs and add the additional support we need for the couple of manual-breaking features, but it’s possible that could be done in less time than it would take to do the manual splitting even once. I’m not sure, though, because that code base is maintained by a single owner; it’s one of those. We could also hope that the browsers implement CSS Paged Media; if they implement the standard as-is, it would support all of these things. (CSS Paged Media, I learned through this, is very great, and I’m very appreciative of it.) But I don’t see that happening anytime soon. And the last option is that we just accept it the way it is, without Paged.js. As-is, print-to-PDF doesn’t introduce errors, but it doesn’t have any of the features we had on the first slide, like the page header and footer, the numbering, the table of contents, and everything. That would all be lost, but at least it wouldn’t have errors in it. I don’t really have a preference between these. I don’t particularly care whether the PDF meets the standards, but our Ecma representatives here really do, and I respect that. So we’re doing what we can to solve it. This is what I see our options as moving forward, and I’d love to hear if there’s any feedback.

SHN: Thank you. Thank you, MF, for taking the time to go through this; and KG, I know that we’ve had a number of back-and-forths. Yes, I understand that the relevance of having a PDF document may have different weight for Ecma and for the committee, but nevertheless, it’s still important. The options that you mentioned in your conclusion, MF, are they in order of some priority, or is there an order of which is the most recommended?

MF: No, they’re just the order that I happened to write them down in.

SHN: What would you say could be the path of least resistance in finding a solution, in your list of conclusions?

MF: So I -- I still would probably recommend going with a professional layout service, number 2. In the past, we had done some research for ECMA when requested to find the layout services. We found, I think, four of them, and the price varied quite a bit, but it ranged from like $1,000 to $5,000, and this would be a per year cost. And I think that they would do a better job than even the best work that we do with Paged.js. We would probably have to give them -- we would just give them TOOLS-011 and a couple of exceptions to TOOLS-011 that as I said, the editor group would prefer not to make. And then they would produce something better than what we could.

SHN: So if I understood correctly from the work, and it also validates some of the comments that Allen already made, much can be done, but there is a manual process, and I know that that manual process obviously can be quite tedious. You’ve also noted that. In using option 2, paying a layout service to do this and giving them TOOLS-011 as you mentioned, would that somehow alleviate some of those manual things that still need to be done?

MF: So they would work with the original source document and we would not try to do the print to PDF using Paged.js. There would be no manual process for us. Their process is manual. They hand lay out the document.

SHN: Okay. I mean, I’ve looked at your slides already once and I appreciate the feedback you gave in the meeting. I’d like to review some of it, and I may come back to you with some specific questions. My last question, if I may, and I’m sorry if it’s before my time: the recommendations that were made by the editors on where we could find some of the solution, would you forward those to me again?

MF: Yeah, I can dig that up.

SHN: That would be appreciated. Because we have such a short time, or the timing is very critical until June, we have requested Allen to do it one last time to allow me a bit more time to find a solution. He is mulling it over, so in the meantime, if you could send me that, we will find a solution. I appreciate this feedback, and I will come back if I have any deeper questions. Thank you.

MF: And the amount of manual work AWB would have to do this time should be significantly reduced. It’s just adding the manual breaks and doing some table or list splitting where appropriate. He wouldn’t have to redo any of the numbering stuff. That should be mostly handled. I’m still not 100% confident that Paged.js would not introduce a bug, but if it’s reviewed properly, it should be less work than previous years.

SHN: Yes, you had mentioned that there may be some errors, the warning you mentioned. So that’s important to take care of. Okay -- and you did this just for 262, because I think for 402, we didn’t have any issues?

MF: All the work I've done should apply to both, because they both use ecmarkup, and the vast majority of the changes I made were to ecmarkup to improve the print-specific CSS. The breaking rules are in there as well. If Paged.js is improved, if we take number 3, like we asked somebody to implement those last couple of manual breaking overrides, both of the documents should lay out entirely correctly.
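For readers unfamiliar with CSS Paged Media, the kind of print styling and break control being discussed looks roughly like the following. This is an illustrative sketch only, with made-up header text; it is not the actual ecmarkup print stylesheet:

```css
/* Illustrative sketch of CSS Paged Media rules of the kind Paged.js
   polyfills: running headers/footers, page counters, and break control.
   The header text here is hypothetical. */
@page {
  size: A4;
  @top-center { content: "ECMA-262"; }       /* running page header */
  @bottom-center { content: counter(page); } /* page number in footer */
}
h1, h2 {
  break-after: avoid;  /* keep headings with the content that follows */
}
table, pre {
  break-inside: avoid; /* avoid splitting tables and code blocks across pages */
}
```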

SHN: Okay. You mentioned the 50 to 100 hours. That’s for that manual layout?

MF: That is based on how long it takes to do each render, versus how many manual changes there are per page, times how many pages there are.

SHN: My last comment, thank you, and then I will stop: is there an opportunity for the editors group, regarding the work that we give to a third party to do this, that some of the work could be shared by the editors group, or is your conclusion that all the work should go to a third party?

MF: If I understand the question correctly: if we went with a layout service, number 2, then they would start with the HTML document, and a lot of the -- I guess all of the changes that I have done wouldn’t be relevant there, because they wouldn’t be doing print to PDF.

SHN: Mm-hmm. Okay. Just trying to estimate the efforts. Okay. I think I do need to maybe ask you a couple of questions offline just to clarify some things. And we can move forward. Thank you, I appreciate the feedback.

USA: Next on the queue we have KG.

KG: I want to emphasize that Paged.js is open source software and it is actively being worked on. They have a beta right now which I have been filing bugs against. It’s possible that they will improve in the future to the point that it’s accurate for our needs. In fact, it’s decently likely, I think. And it certainly becomes more likely if they are sponsored. So I do think that it’s worth considering the possibility of trying to have ECMA sponsor Paged.js to improve their software. And/or to have the people at Paged.js specifically take the document that we are working on, sponsoring them to do the layout of that document, which they are likely to do by improving the software in a way that would be usable for us in the future. That’s all.

USA: Next on the queue we have CDA.

CDA: Yeah. Some of the content seems to imply that the PDF versions generated using this may have some mistakes. Do we have a sense of the extent of that?

MF: So the mistakes are mostly when it’s doing this weird breaking, which we would avoid through the manual break overrides. So I would say that the risk is greatly reduced, because AWB would have gone through and broken in reasonable spots. But there’s still a chance, and AWB is very careful. It’s possible that he also carefully reviewed every single line and every single page number reference and that kind of stuff and ensured that they are correct. But I don’t know beyond that.

USA: I am next on the queue because I need to make a clarification. We also previously discussed briefly the 402 spec. And on the question that CDA just asked, I wanted to clarify. My understanding is that it is, you know, ultimately a best effort kind of a task. At least for the editors. So out of all of the options that MF listed out, what we use is basically the fifth one. We would inject some CSS into the built spec and then print to PDF. It might have to be done in a certain browser that would produce the best results and then we sometimes go through the parts of the spec, through the tables and stuff and make sure nothing is clipped. But ultimately, there have been some mistakes in the past PDF versions for 402 and that’s not entirely avoidable. But the discussion today is more about how to do a better job than we have in the past.

DE: So thank you so much, MF, for preparing this presentation and going through this exercise. This is really helpful. Now that we are probably going towards making a request for budget allocation for ECMA to solve this problem, given the lack of volunteers in committee to do this manual work, I want to ask SHN: what is the timeline we have to make this budget request? In the past, when we made budget requests, we have had some problems getting them in time for the ECMA process.

SHN: For the budget: the budget for 2024 is already built. For this year, I would hopefully find a solution again that is effective and works for everything, through Allen; I have some budget for that. Depending on what we would like to do to make sure this is ongoing in the future, I need budget information at least by the third quarter, when I start doing my budget planning. So we have a little bit of time for 2025.

DE: If Allen is open to it, which he said he wasn’t, but maybe that has changed, that sounds like a good plan to me. If Allen weren’t available, I don’t understand why we couldn’t redirect that budget to another solution.

SHN: Correct. So that can be done. I just assumed it would be difficult to find a solution immediately for June without knowing more, until I listened to this presentation. I also asked Allen to consider it; he is mulling it over. I don’t have a firm yes or no from him at this point in time. But that funding would be used; if he chooses no, we still have to find something.

DE: Great. That sounds like the budget has been allocated, and if he’s unavailable, we will find another solution, working to have something by June. Is that correct?

SHN: That’s correct.

DE: So I guess Michael and Samina will be in touch about this so the details can be understood?

MF: I would like to add that in the previous discussions with the layout services, the lead times were not terribly long. If we have a month, that’s probably fine for all of the layout services.

DE: Okay. Thank you.

USA: Thank you, MF. Would you like to make any concluding remarks?

MF: I don’t think so.

Conclusion

  • MF has incorporated AWB’s PDF generation advice, and found that it will still take a week or two of manual work to produce a high-quality PDF. There are no volunteers among the committee or editors to do this work.
  • For 2024, AWB will (somehow) do this work again. TC39 requests that Ecma include this work in future budgets, as it has done for 2024.

ECMA402 Status Updates

Presenter: Ben Allen (BAN)

BAN: All right. And hopefully that is visible. We are currently in freeze, and so there are no normative changes. We have a number of relatively small editorial changes. Many of them are largely meta changes: README updates, stuff like that. But we do have several editorial changes related to better adhering to BCP47. All of these are changes to the algorithms that are editorial because they involve language tags we don’t use.

BAN: So the first of them is that we previously mishandled single-letter BCP47 tags. It wasn’t actually a problem, because we don’t use those, but in order to clarify the algorithm we have made it actually correct. We have also done several refactors simply to make our locale resolution algorithms look more like the BCP47 algorithms. And likewise, our default locale doesn’t generate any tags with -u- extensions, and we have better documented that.

I would say the most meaningful editorial change is that we previously had an alias name that was confusing: dataLocaleData, alongside localeData. We have changed the alias to resolvedLocaleData, and made several changes to the names of associated things. We have also capitalized some slot names in 402. There are some slot names that have to be lowercase because they are in the same namespace as certain pattern strings that must be lowercase; however, there are some that didn’t need to be lowercase, and we have camelCased these to adhere to the standard used in 262.

This next one is unrelated to the BCP47 changes. Previously the DateTimeFormat spec used ambiguous and non-standard language for table iteration; that is fixed. Also, there were some steps to lowercase strings that were already guaranteed to be lowercase. And then there are a few meta changes. Most notably, this one I like: in the README and notes we have updated old references to the master branch to main, since we are now using main. Finally, we were missing a LICENSE.md file, so we have added that. And that is it. Thank you.

ECMA404 Status Updates

Presenter: Chip Morningstar (CM)

CM: So, as usual, not much to report. JSON is in its happy place.

USA: Good for JSON.

Test262 Status Updates

Presenter: Philip Chimento (PFC)

Slide contents:

  • Since January, a certain amount of Igalia's test262 development has been subsidized by Sovereign Tech Fund.
  • You may have noticed that many more tests landed in Q1 2024 than in Q4 2023
  • Worked with proposal authors to review tests and ensure coverage for RegExp modifiers and Set methods
  • Wrote tests for a needs-consensus PR that had long been blocked on test coverage
  • We'd like to encourage proposals to help write testing plans. Providing good documentation for this is high on our list. Let us know what you think about this!

PFC: All right. So I have a few status updates from Test262. One happy piece of news that I can report is that since January, a certain amount of the Test262 development that we have been doing has been subsidized by the Sovereign Tech Fund. You can click that link for more information about this fund. They have been funding a lot of foundational infrastructure in the past year or so, and we are happy to add Test262 to that. Not unrelated, you may have noticed that many more tests landed in Test262 in the first quarter of 2024 than the previous quarter. So this makes a difference.

PFC: Since the last update in February, we have worked with proposal authors to review tests, which resulted in Test262 now having full coverage of RegExp modifiers and Set methods; these were PRs that the proposal authors, among other people, contributed and have now landed. There are now also tests for a needs-consensus PR that had long been blocked; it had been open for a couple of years. This is the AsyncFromSyncIterator normative change.

PFC: Another thing that we have discussed, about how to make our process easier to navigate for proposals, is that we would like proposal authors to help write testing plans when a proposal enters Stage 2.7. Testing plans are not a new thing. They have been around for a long time. Often they are written by the Test262 maintainers, but really the proposal authors are the ones who have the expertise to write them. So it’s high on our list to provide some good documentation for how to write a testing plan. And if you have thoughts on this, please let us know. I would be happy to answer any questions.

USA: There’s none on the queue. We can give it a second. No questions. Well, thank you Philip for the update.

TG3: Security update

Presenter: Chris de Almeida (CDA)

  • (no slides)

CDA: TG3. Meeting regularly. Lots of great discussion. TG3 previously had an APAC-friendly time for our APAC friends to attend. However, these meetings were quite poorly attended, and for a long time attended by none of our APAC friends. We are happy to meet at an APAC-friendly time if we are getting attendance then, but until that time, we are not going to do it. We will bring it back if the need arises. As part of that, when trying to figure out when we would like to move that APAC-friendly meeting to, we resolved to use the same time as our other meeting, but increase the cadence from every two weeks to weekly. So those are at the same time, which is at 12:00 Central Time.

CDA: And the other item we wanted to take care of here was, with the increase in meeting cadence, we could use some more support in the convenors group. So KKL has agreed to join the convenor group, pending the approval of the committee. So I am requesting consensus for KKL to join the TG3 conveners group.

+1s from JHD, CDA, NRO, MM, JKP, DLM

Conclusion

  • KKL has joined the conveners group for TG3
  • TG3 meeting cadence increasing to weekly
  • APAC-friendly meeting times being removed from schedule due to limited attendance

TG4: Source Maps

Presenter: Jon Kuperman (JKP)

JKP: Cool. Just giving a quick update on TG4/source maps and some of the things we are working on. I think the first big thing is that we began work on the test suite. Last time we talked about how we will internally gate proposals on buy-in from implementers and test coverage, and we have now begun working on a test suite. The goal is similar to Test262, in the sense that we want extensive coverage for all of the features of the source maps spec. Unlike Test262, we will want suites of tests that run in generators (build tools) as well as debuggers and browser tools, and also error monitoring tools; some tests can be shared among all three. There’s a link in the agenda, and we would be eager to get feedback from folks that have worked on this type of thing before, as far as how to organize the test suite, things like that.

JKP: The next thing is that we had historically had two GitHub repositories: one for the specification and one for RFCs and features. We have merged them together; it’s found under tc39/source-map. There are two more PRs to move over, but as far as everything else is concerned, this is the new source for everything.

JKP: The big feature we are working on, which we have been calling the scopes proposal, is a proposal to embed scope information in source maps, which allows debuggers to reconstruct the original application's scopes: setting breakpoints on original variable names, improving stack traces, and showing only the original code rather than anything that was added by build tools or compilation. We have been iterating on it and have begun work on implementations, which is helping guide the specification itself. We have a link here to the proposal. We would love any feedback, especially if you’re involved in debugging or source map generating tools. We have also been making good headway with the existing specification. We keep finding text like this, where it says that the VLQ values are limited, but not whether we error if a value is out of range, or whether the source map is simply invalid. We have made great progress hardening the existing spec, making it clearer, and working with consumers and generators to see what they are currently doing.
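As background for the VLQ discussion above, here is a minimal userland sketch of base64 VLQ decoding as commonly implemented for the source map `mappings` field. This is illustrative only, not the spec's algorithm text:

```javascript
// Base64 alphabet used by source map VLQ encoding.
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Decode a run of base64 VLQ values. Each base64 digit carries 5 payload
// bits plus a continuation bit (0x20); the lowest bit of the assembled
// value is the sign.
function decodeVLQ(str) {
  const out = [];
  let value = 0;
  let shift = 0;
  for (const ch of str) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift;
    if (digit & 32) {
      shift += 5; // continuation: more digits follow
    } else {
      out.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return out;
}

console.log(decodeVLQ("AACA")); // [ 0, 0, 1, 0 ]
console.log(decodeVLQ("D"));    // [ -1 ]
```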

JKP: We have another RFC. The specification says that we need to support the sourceMappingURL comment. It’s a comment that can be in JavaScript, CSS, or WebAssembly and points to where a source map lives, but the spec doesn’t say how to extract it. We were thinking of mandating that people parse the script and find the comment, but we got feedback that, for performance reasons, that’s not viable. So NRO put up an RFC with two ways of getting the comment out: one with a regular expression and one with a full parse. We would love feedback if that is interesting to people. It’s linked here.
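To make the regex-based option concrete, here is a hypothetical sketch of extracting a sourceMappingURL with a regular expression. The `extractSourceMappingURL` name and the regex itself are illustrative, not the RFC's actual text:

```javascript
// Illustrative only: find a sourceMappingURL comment of the form
//   //# sourceMappingURL=<url>   or   /*# sourceMappingURL=<url> */
// The last occurrence wins in this sketch. A regex approach like this
// avoids a full parse, but can be fooled by comment-like text inside
// string literals, which is part of the trade-off the RFC discusses.
function extractSourceMappingURL(source) {
  const re = /\/\/[#@] *sourceMappingURL=(\S+?) *$|\/\*[#@] *sourceMappingURL=(\S+?) *\*\//gm;
  let url = null;
  for (const m of source.matchAll(re)) {
    url = m[1] ?? m[2];
  }
  return url;
}

const js = 'console.log("hi");\n//# sourceMappingURL=app.js.map\n';
console.log(extractSourceMappingURL(js)); // "app.js.map"
```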

JKP: The last thing, I think, is that we set dates for 2024. It will be June 24th and 25th in Munich, hosted by Google. We would love to have people there in person or remote. The themes are adding tests to the test suite, implementing scopes in tools, and then working together to finalize the text. In September we want to come to plenary and ask for approval on that. If you are interested in attending, please let me know or join the Matrix room. And that’s all for my update. Thanks very much.

SYG: Hi. I had a question about the test suite. You mentioned a few different things, like DevTools and tooling. Are there different sub-suites?

JKP: I think it'll be the latter. Right now we focus on browser DevTools: Firefox and WebKit, and one coming from Chrome. In a sense we have not gotten there yet; we don’t have any tool-specific ones. We will end up with three suites with shared tests between them.

SYG: In that case, if I may recommend, and I don’t know if this is realistic: the browser vendors and the JS VM teams have existing externally maintained test suites that they already run. The least amount of friction, if you’re going to introduce multiple new suites, would be to land them in those. The ones that require only JS to run could be done the way 402 is: it exists in Test262 without being part of 262. For the ones that need DevTools, I am not sure, but this may be a good opportunity to use an existing one that is core for things that require a browser shell, so there’s no additional infrastructure that has to be set up.

JKP: Yeah. I think that’s great. Would you mind if I followed up with you offline about some of the specifics? That sounds good and is the type of feedback we are looking for right now.

SYG: Sure. Please do.

TG5: Experiments in Programming Language Standardization

Presenter: Mikhail Barash (MBH)

MBH: Yeah. All right. Hello, everyone. This is a short update. At the last plenary meeting we got consensus to form the task group, and the co-conveners are YSV and myself. At the end of March, we had the first meeting. We had 8 participants, representing the companies mentioned on the slide. We introduced the idea behind TG5, and discussed planned areas of investigation, as well as responsible research practices. In terms of cadence, we plan to have meetings on the last Wednesday of every month, with alternating time slots so we accommodate the US, Europe and Asia. So the next meeting is Wednesday, 24th of April. All of this is already in the calendar. We also have a TG5 repository, TG5 team and a Matrix room. Thank you, Chris, for facilitation. We will give a presentation at the Standards Group of the OpenJS Foundation at the end of the month about the current and planned activities of TG5.

MBH: We also intend to arrange TG5 workshops, which will be colocated with the hybrid meetings of TC39. We see this as an important step for building an academic community around TC39. The first workshop we will colocate with the plenary meeting in Finland in June. The plenary starts on the 11th of June, and on the 10th we have a workshop, hosted in the city of Turku, two hours by train from Helsinki; the schedule is such that it is still possible to attend the community event on Monday in Helsinki. And I have just opened a reflector issue with almost all the details there.

MBH: And we (YSV, MF, and myself) are currently preparing the TG5 charter document and will make it available as soon as it’s ready. That’s it from me.

USA: Thanks, Mikhail. Also, thank you for making the meeting timings, as you mentioned, as inclusive as possible. It’s great; happy to hear about TG5.

Updates from the CoC Committee

Presenter: Chris de Almeida (CDA)

CDA: Yeah. Very briefly. CoC committee, meeting regularly. Two issues. One dealt with and concluded and another new one reported and we will work through that as per our process. Other than that, just a reminder, we are always looking for new individuals who would like to join the code of conduct committee. If you are interested, please reach out to someone on the code of conduct committee. Thank you.

“array last” proposal withdrawn

Presenter: Jordan Harband (JHD)

JHD: So this is just a notification. The champion withdrew that proposal because Array.prototype.at is already in the language, and they don’t see the need to continue it. So that has been done and the proposal repo is updated. That’s all.

Conclusion

  • The ‘array.last’ proposal has been withdrawn

TC39 website - call for translators

Presenter: Chris de Almeida (CDA)

CDA: The TC39 website is translated into the languages that you see here. There was a change to one of the menus, so we are in need of help from the community for translations. JWK opened a PR for Simplified Chinese, so thank you, JWK. We are still in need of German, French and Russian. We are also always welcoming brand new translations, but the immediate need is for the ones that you see here. Thank you.

Temporal normative bugfix

Presenter: Philip Chimento (PFC)

PFC: (Slide 1) My name is Philip Chimento. I am going to be presenting a short update on the Temporal proposal. I am a delegate for Igalia. This work was done in partnership with Bloomberg.

PFC: (Slide 2) So first a short progress update. I know you are used to hearing this, but we are approaching the finish line of the proposal. Currently, the proposal champions are focussing on making sure that all of the in-progress implementations are successful. So what we have been doing recently is fixing bugs found by implementations. I will present one later in the presentation. We are making targeted changes to make things easier for implementations and addressing concerns that are not specific to any feature like the code size. I will give more detail about this in one of the following slides.

PFC: (Slide 3) If you are an implementer of the language, we would like your help. We want to make sure that any doubts or blockers are addressed before the next plenary in June 2024, to make sure there are no further obstacles to implementation. So if something is preventing you from implementing Temporal, let us know. We would like to work with you to resolve it very soon, so don’t wait. If we need to make changes to the proposal, we want to make them now and present them in June. If you want to talk about something or ask questions, we have a meeting biweekly on Thursdays, 8 o’clock a.m. Pacific Time. If that time doesn’t work, let me know and we can set up another time to talk. We already have some people working on implementations who join regularly to get the chance to ask any questions that they have. For example, we have somebody working on the Temporal implementation for Boa who joins every time, and we have somebody working on a polyfill implementation who joins regularly, and this has been helpful both for the implementations and for the proposal itself. Among other things, it’s how we discovered the bug that I am presenting a bugfix for.

PFC: (Slide 4) A short summary about concerns that have been raised. Several we discussed in the hallway discussions in San Diego during the February plenary. So there are concerns about the compiler binary size on Android on V8. We investigated why it's taking so much space, you can read more details on the issue, but we made a proof of concept showing how to reduce that size, without necessarily changing anything from the proposal. We heard from JavaScriptCore that there are concerns about the growth of the standard library, but not specifically about Temporal.

PFC: (Slide 5) We heard from SpiderMonkey concerns about how this affects the installer size for Firefox. I would be interested to know more about this, and maybe do a similar investigation to find out where the size increase is coming from. We have heard from V8 concerns about the complexity of the proposal. And we heard from Adam Shaw, the polyfill implementer that I mentioned before, who has been going over the duration arithmetic and found some issues. So in response to the concerns about the complexity, we are considering what we could drop or reduce the functionality of. One thing that we are talking about is user-defined calendars and time zones, and the associated classes and/or the subclassing. In response to the duration arithmetic bugs, we are considering whether we could drop the relativeTo parameter in the add and subtract methods of Duration. These are things that are actively under discussion. So I am not making any proposals right now. As I said before, we are open to suggestions and want to hear from you if you have opinions about this. It helps us if concerns can be made specific.

PFC: (Slide 6) So that said, I will move on to presenting the normative change that we would like to ask for consensus on today. (Slide 7) That is an edge case, in rounding ZonedDateTime. If you round to the nearest day, it was possible in rare cases if you were dealing with a daylight saving time change, that an extra day was added. You can see this code sample that would have exhibited the bug. And what the correct and incorrect results are. I would like to once again thank Adam Shaw for discovering this bug. You can click through to the pull request if you would like to see exactly how the fix works. There is a test262 PR pending to add coverage for this case.

PFC: (Slide 8) That was it for what I wanted to present. Are there any questions before we move on to asking for consensus on the normative change?

USA: Yeah. First on the queue we have DLM.

DLM: Hi. First, I wanted to say that I support the normative fix. Overall, we are happy with the direction that Temporal is going, and we have been staying fairly current with the editorial changes, thanks to the champions’ hard work. We did have some concerns about installer size, but those have been resolved; we spoke with the product managers for desktop and we are in the clear. That being said, while we are not asking for a reduction in complexity, we would not be sad to lose the user-defined classes. That’s it. Thank you.

PFC: Thanks. That’s good information. Thank you very much.

SYG: Yeah. Some more color on the complexity reduction request. I don’t think we have done a super thorough job reviewing, myself certainly not, being a non-expert in the space, to call out things to cut. The user customization is an easy thing for me to point out as a possibility, given that I am not familiar with the space, and it seems like a much more niche use case to customize aspects of your date-time and calendar handling. If the champions are willing to reduce complexity there, V8 and Chrome will take any reduction in complexity that we can get. There is the code size concern, and I want to thank you, Philip, for doing the investigative prototyping there. But there is also the ongoing maintenance concern: this is the kind of thing that is very likely to get written once and then let go, and V8 will be maintaining these libraries in perpetuity. The fewer knobs it has, the better its chances of being well maintained in the future. It’s a pretty broad, high-level concern. Given its size today, any reduction in code complexity and code size is welcome.

PFC: Okay, thanks. That’s good information as well.

PFC: All right. (Slide 9) I would like to request consensus on this pull request linked here that fixes the rounding bug that I described.

USA: All right. Let’s give it a minute. I would also reiterate that any sort of statements of explicit support are also welcome. All right. There doesn’t seem to be anything in the queue. So you have consensus.

PFC: Okay. Thanks. (Slide 10) I took the liberty of writing a proposed summary for the note, which I will show here and paste into the notes.

Speaker's Summary of Key Points

  • Consensus was reached on a normative change to fix a bug in rounding that occurred in rare cases having to do with DST.
  • Over the next few weeks, we plan to dig into remaining concerns from TC39 delegates, particularly with the goal of reducing complexity.
  • Follow the checklist in #2628 for updates.

Duplicate named capture groups for stage 4

Presenter: Kevin Gibbons (KG)

KG: Okay. So duplicate named capture groups. As a reminder, since it’s been a couple of years since I presented this, this is a feature that allows you to have the same capturing group name in two parts of a regular expression, with the constraint that they can’t both participate in the match, which is to say they have to be in different alternatives, separated by a pipe. But otherwise, it works exactly like you would expect. You can use back references to the capturing group name. The `.groups` object of the result will contain the value from whichever one actually matched. In the case of repetition, the last repetition to match defines it, the same way it works for regular capturing groups.

KG: The specification text is quite simple, although it’s not been reviewed by all of the other editors yet; that is, it’s not been reviewed as a pull request. But of course the specification text was approved as part of getting to Stage 3. It has been shipping in Safari for a while, and is shipping in Chrome 125, which isn’t stable yet; I believe it’s currently in the dev channel. Since this only causes syntax to become legal which wasn’t previously legal, there isn’t much risk of web incompatibility. SpiderMonkey uses V8's RegExp engine as the underlying engine, so they have to do a little bit of integration work to expose the new functionality, and I believe that work is underway. So it’s not yet shipping in Firefox.

KG: I believe those are all of the requirements for Stage 4. I would like to ask for consensus for this proposal. Is there anything on the queue?

DLM: We support this for Stage 4 and our implementation is in progress.

SYG: Yeah. Looks good to me. Chances of this being reverted in Chrome are very, very low. So we don’t need to wait.

MM: I support.

USA: Great. There’s also statements of support by WH and DE. I think you have overwhelming support, Kevin. This is probably the most statements of expressed support that I have seen. Great work.

KG: Okay. I will take that as Stage 4 then. Thanks very much.

USA: I have a more exciting proposal for you. Would you like to try the second one in 7 minutes?

KG: Yeah. Let’s do it.

Speaker's Summary of Key Points

  • Proposal is shipping in Safari and Chrome and underway in Firefox

Conclusion

  • Stage 4

Set methods for stage 4

Presenter: Kevin Gibbons (KG)

KG: All right. So set methods for Stage 4.

KG: The pull request for this proposal is open and passing CI. It’s approved by one of the other editors, and has review from jmdyck, an external contributor who is thorough about catching certain issues; his feedback has been incorporated. There are no changes relevant to implementers. This proposal has been at Stage 3 for quite a while. It was blocked on tests for a while, and then tests landed, which unblocked implementation and shipping.

KG: Again, it has been shipping in Safari since 17, since September, and in Chrome since 122, which was a month or so ago - I forget - but that Chrome release is stable. And I know that Firefox has an implementation, but I don’t believe that they have flipped the switch to ship it yet. So: the pull request is open and approved by one of the editors, and it is shipping in two major implementations. Those are the Stage 4 requirements, and I would like to ask for consensus on Stage 4. We have talked about this recently, so I won’t go through it in much detail, but to recap, this is adding 7 different methods to Set.prototype: union, intersection, difference, symmetricDifference, isSubsetOf, isSupersetOf, isDisjointFrom.
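For readers following along, the semantics of a few of these methods can be sketched with plain functions. This is an illustration of the general behavior only, not the specification algorithms (which differ in argument validation, `Set`-like handling, and other details):

```javascript
// Illustrative sketches of three of the seven Set methods; the actual
// spec algorithms are more involved (set-like arguments, ordering, etc.).
function union(a, b) {
  const out = new Set(a);
  for (const x of b) out.add(x);
  return out;
}

function intersection(a, b) {
  return new Set([...a].filter(x => b.has(x)));
}

function isDisjointFrom(a, b) {
  return [...a].every(x => !b.has(x));
}
```

For example, `union(new Set([1, 2]), new Set([2, 3]))` yields a Set containing 1, 2, and 3.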

USA: Great. You have two statements of support on the queue already. Three now.

DLM: Yes. It is completely implemented and it has not been shipped in release yet. I hope to do this this week. Get the work done, that is; shipping will be a few weeks later.

Also +1s from MM, WH, LGH, JHD.

KG: Okay. Thanks all for the explicit support. And for implementations and so forth. That’s all I got.

Speaker's Summary of Key Points

  • Proposal is shipping in Safari and Chrome and underway in Firefox

Conclusion

  • Stage 4 achieved

Joint-iteration: confirm our stance on issue 1

Presenter: Michael Ficarra (MF)

MF: So issue number 1 was presented at the last meeting when I was presenting the joint iteration proposal. Was it the last meeting? It might have been the meeting before; I am not sure. We had talked about all the open issues, one being number 1. And the issue is asking whether we should have a joint iteration facility on arrays, as well as iterators. I am not particularly opposed to this, but I am also not really interested in pursuing such a facility on arrays, because I don’t think it provides as many obvious benefits as it does on iterators, which are harder to coordinate iteration of. This was asked by JHD. I asked at the meeting whether anybody thought we should do this. Nobody spoke up in favor and one person spoke against it, that person being GCL. I took that as committee feedback to not pursue arrays in this proposal - or perhaps to pursue it as a separate proposal - but JHD understood that differently: that we may just be delaying it. I am looking to advance joint iteration to Stage 2.7 at the next meeting. I feel it will be ready for it, and this will need to be resolved one way or the other before that happens.

MF: So I just wanted to see if there was anybody who had strong opinions in either direction on this. So that we can hopefully move joint iteration forward. Not during this meeting; at the next meeting. Anybody in the queue?

JHD: I am just adding some color here. So if the committee in general feels that it’s best to do it separately, that’s fine. But pretty much every design decision made for iteration seems like it would constrain the design decisions for arrays, such that there would be very little to talk about. It would be process overhead and a time delay to do it as a separate proposal. That’s fine - what is a few months in the lifespan of JavaScript? But, you know, I would essentially just be duplicating what is in your proposal and then writing a bunch of text and making a repo and stuff. So I can do that, if that’s what the committee thinks; it just seems like a waste of time for me and for the committee. But if it’s decided to do them together, then I am happy to do whatever work is needed to contribute to this proposal, including spec text and tests and whatever, so that MF doesn’t have additional burden for something he’s not particularly interested in. That’s intended less as a carrot than as a lack of a stick. And I see MM is asking what value it adds. There have been a number of folks commenting in various proposals and spaces over time that iterators are slow and it’s ideal to avoid them. Some people have brought experience from other languages where they prefer the clarity of using a simple list format, whether that’s an array or whatever, over using a full iteration thing. This benefit is lesser in languages designed with iteration as a first-class primitive from the beginning. In the Matrix chat, someone mentioned that in one of those languages, sometimes they prefer just using a straight-up list.

JHD: I like to use iterators, and I would use the iterator-helpers version of this proposal when I am doing multiple operations together. But sometimes I would prefer to work with arrays, and if this is only an iterator helper, then what I will be doing is either writing my own function, or using the helper and immediately converting back into an array, which adds a lot of performance overhead. That’s the value: it’s nice when arrays and iterators have similar operations, so I can use them in similar ways, and then if I want to refactor in either direction, it's relatively trivial to do that.
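A generator-based sketch of the kind of facility under discussion, working uniformly over arrays and iterators (the name `zip` and the API shape here are assumptions for illustration, not necessarily the proposal's final design):

```javascript
// Hypothetical zip over any two iterables (arrays included); stops at
// the shorter input. The joint-iteration proposal's actual API differs.
function* zip(a, b) {
  const ia = a[Symbol.iterator]();
  const ib = b[Symbol.iterator]();
  while (true) {
    const ra = ia.next();
    const rb = ib.next();
    if (ra.done || rb.done) return;
    yield [ra.value, rb.value];
  }
}
```

Because the inputs only need to be iterable, the same function serves both the array case and the iterator case, which is the symmetry JHD describes.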

MM: So let’s make sure I understand. So the main motivation here is performance. Performance aside, is there remaining motivation for this?

JHD: Yes, and simplicity. I personally find (and I have seen this expressed by others so I am not completely alone) often it’s simpler to think about and reason about a static array of stuff and transform that, versus kind of a stream-like approach where you’re chaining a bunch of transformations. Even though the effect is the same, the mental model is a bit different.

MM: Let me go directly to my general concern with such things, which is the psychological size of the language - psychological size in terms of cognitive burden on programmers. The argument here would be that there’s already quite a lot of parallelism, a strong analogy, which is where the iterator helpers came from. So given that we are adding this joint iteration to iterator helpers, adding it also to arrays in order to keep the parallelism of the two systems reduces the cognitive burden in the language, in a way, rather than increasing it.

JHD: Yeah. In general, that is also my philosophy, that similar things should have similar operations, even if on their own, we might not have added it to one of the things.

MM: Given – I value cognitive burden over, you know, length of spec text or implementation complexity, so this sounds like a good rationale all around. I am in favor.

CDA: All right. We have less than 2 minutes left. SYG is next.

SYG: I want to - this is like one of those mechanical things that may be contra the goal of reducing cognitive burden. I don’t think I have a strong opinion on whether joint iteration ought to be added to arrays or not. But we have had multiple web incompatibilities, to the extent that we are not very interested in adding new Array prototype methods. So if we were to add these, I want to clarify: is the thinking that we add these as static methods on the Array constructor? And if so, does that help the cognitive burden thing or make it a little bit worse? Because it’s unlike the other Array prototype methods.

SYG: The first part of the question was to MF. Is the plan – I guess your plan is to not add these to arrays. So maybe the hypothetical doesn’t help.

JHD: Given the feedback about prototype methods on arrays in general, I would probably just add them as statics. To me, the placement isn't as important as the presence of the operation - and especially with editor hovers and type hinting and things like that, I don’t think it will make much of a difference.

SYG: You can also do the to-and-from-array via an iterator intermediate. So that goes into my second topic: if we tease apart the performance motivation from the convenience add, or the cognitive-burden-reduction goal - Mark, what are your thoughts on the fact that, for any iterator operation, you can already go to and from arrays? We have static helpers for that. The affordance is there today.
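The to-and-from-arrays affordance SYG mentions can be approximated today with spread syntax and a generator, without any new API surface (the standardized helper forms are `Iterator.from` and `Iterator.prototype.toArray`; this sketch avoids relying on them):

```javascript
// Array -> iterator via a generator, and iterator -> array via spread.
function* iterate(arr) {
  yield* arr;
}

const iter = iterate([1, 2, 3]);   // an iterator over the array
const backToArray = [...iter];     // [1, 2, 3]
```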

MM: Yeah. I think, therefore, I would not think it’s a problem if we omitted it. I wouldn’t mourn it. But in general, are there other iterator helper methods that exist on iterators that don’t have a parallel directly on arrays?

MF: Yeah. There are many iterator helper follow-up proposals that are adding things that are not on arrays. The minimum set we arrived at in the original iterator helpers was defined that way because they were things that were easiest to get through as a bundle. Everything else was going to be pursued individually one by one. Because they mirrored the array methods, they were the obvious set.

MM: So if the expectation is that iterator helpers will over time grow methods that are not parallel functionality available to arrays, then the symmetry is already broken, or expect it to be broken, so omitting this one would fit into the broken symmetry. Either way, what that says is the cognitive burden is a wash. It doesn’t particularly argue for it. And I will defer to others on the motivation, purely concerned about the cognitive burden motivation.

SYG: I am done with my items.

CDA: We are technically past time. MF, if you want to take a look at the queue, and maybe we can get through quickly, we can give a couple more minutes.

MF: I am fine with just capturing the queue for now. And I think we don’t necessarily need to resolve this during this call; I just wanted it to be resolved before the next meeting. If we have the eyes on it - the attention of the people who care - please continue the discussion on issue number 1; I think we can resolve this just fine. I had 2 points to wrap up with, I guess. One was: if we add this to arrays, I would like to not set a precedent that every iterator helper method we pursue needs an array parallel. And the second point I wanted to make was… I’ve lost it now. Apologies; I should have written it down. Yeah. Please continue on issue number 1, unless anybody wants to make a point of order right now to continue the queue. If nobody objects, I will capture the queue for myself and share it in Matrix, so we don’t take up more time than we need.

KG: I would like to get back to this if we have more time we need to fill later.

MF: Yeah. I will request the chair, we will add an extension item, if we have free time later.

Promise.try for Stage 2.7

Presenter: Jordan Harband (JHD)

JHD: So, I’m talking about Promise.try. I was hoping to ask for 2.7 during this meeting. One of the reviewers has not confirmed, but the other reviewer has, as well as all of the editors. There’s only one open question to resolve: do we pass arguments to the function? Specifically, we have this pull request, which is relatively small if you ignore the generated output. It adds argument forwarding. The proposal on main takes a callback and calls it with no arguments. This pull request, which was requested by a number of delegates, also forwards any additional arguments provided to Promise.try to the callback. There’s no other change. So my hope is to get consensus for 2.7 with this pull request, at which point I will merge it and begin work on the tests.
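A polyfill-style sketch of the behavior with the argument-forwarding pull request applied. This is an approximation for illustration: the name `promiseTry` is hypothetical, and the real spec text operates on the `this` value as a constructor and uses spec-level promise capabilities rather than `new Promise` directly:

```javascript
// Approximates Promise.try with argument forwarding. A synchronous
// throw from fn becomes a rejection, because exceptions thrown inside
// a Promise executor reject the promise.
function promiseTry(fn, ...args) {
  return new Promise(resolve => resolve(fn(...args)));
}
```

So `promiseTry((a, b) => a + b, 2, 3)` resolves with 5, and `promiseTry(() => { throw x; })` rejects with `x` instead of throwing synchronously.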

CDA: Mark?

MM: So the addition of the arguments, I like that. But if that’s there, wouldn’t people equally expect it on catch and even then? So once again, a cognitive burden thing. And let’s leave then aside: is it something we can actually add to catch?

JHD: I mean, the difference with catch and then, I think, is that those are callbacks added to an existing Promise. You’re already in a promise pipeline using then/catch/finally.

MM: I see.

JHD: Whereas this is when you are creating a Promise pipeline, or entering one.

MM: Okay

JHD: And so I agree - on their surface, they seem similar. But I think that the parallel between the then/catch/finally API and the syntactic try/catch/finally is more important than a parallel between Promise.try and then/catch/finally, even if they weren’t conceptually distinct, which I think they are.

MM: Okay. I accept that. That seems like a good rationale.

JHD: Thank you

MM: The other question I had is: is Promise.try equivalent to wrapping the block with an async IIFE?

JHD: Yes. (types it out)
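The notes don't capture what was typed; the equivalence in question is roughly the following (a reconstruction for illustration, not JHD's exact snippet):

```javascript
// Both forms call f synchronously and surface a synchronous throw as a
// rejection rather than a thrown exception.
const viaTry = (f) => new Promise(resolve => resolve(f())); // Promise.try-like
const viaIife = (f) => (async () => f())();                 // async IIFE
```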

MM: Given the symmetry with async IIFE, explain why it’s worth adding try rather that be just encouraging people to use async IIFE.

JHD: Sure. This was discussed during the Stage 1 and Stage 2 discussions. But essentially, the first part is that if you’re supporting older environments, syntax is more expensive to transpile. But the other thing - and that question was the reason why this proposal was stuck in Stage 1 for like 9 years - is that this functionality, in userland packages, receives 44 billion downloads. There is empirical evidence that the functional form is preferred, or desired, by many over just using an async IIFE. I have a subjective aesthetic opinion that using an immediately invoked function is messy, and in a world with modules they are obsolete, and I prefer to keep it that way. That is subjective and nobody has to agree with it. The point is that the package, and the functionality it provides, is what I am trying to obviate.

MM: I like the empirical evidence. I have no objection.

JHD: Thank you.

KG: This was a response to the thing MM brought up earlier. I want to make sure it’s clear that this is different from then and catch, in that those are callback-taking methods, whereas this is more a generic function-invoking method. For anything that invokes an arbitrary function, like Function.prototype.call or Function.prototype.apply, it makes sense to forward arguments. For things which expect a function of a specific form, like catch, it makes less sense.

SYG: I want to clarify something I didn’t understand about the older-environment argument, where JHD said the syntax transpilation could be expensive. So the situation is that there is an older environment that does not have async/await, where you have to transpile away async/await, but it has the new Promise.try method, which is a new standardized thing? I don’t understand that.

JHD: Promise.try is polyfillable; async is not. I mean, it doesn’t have to be installed in the environment; it can be a function. Promise.try is a very tiny subset of what an async function can support. And so, if that’s the only issue, certainly you could write a static-analysis transformation that tries to determine when someone is using an immediately invoked async function for this purpose and replace it with that function. In practice, that doesn’t exist.

SYG: I see. Specifically, the concern is that - okay. If it’s for users that are, in their pre-transpile source, writing modern JS, but targeting environments so old they don’t have native async/await support: for those users, if we standardize Promise.try, you can skip the async/await transpilation. Is that the correct understanding?

JHD: That’s what I meant. I don’t think that’s a primary motivation for the proposal; that’s a side benefit for those of us who do that sort of thing. The primary motivation is that it’s clearer about what I am trying to do than any form of immediately invoked function is.

CDA: KG?

KG: Yeah. I am fine with the main motivation of this. The polyfill-ability one, I am confused by. Like, you could have a package that did that. If you are not –

JHD: Right. You’re right. And I think me mentioning that, caused more confusion.

KG: Okay.

JHD: Yeah. Polyfillability is not a motivation we have ever agreed on as a committee as something that motivates design decisions or justifies the inclusion of anything, and I am not doing that here.

KG: That’s all I wanted to establish.

DRR: I mean, I think, you know, one of the arguments here is clarity. And I really don’t know if I am totally sold on the use case. But if we are, and the whole goal is clarity, try really sounds like it has something to do with exceptions in some capacity with promises.

JHD: It does.

DRR: Yeah. I mean, it is, but…

JHD: The specific case this is trying to make ergonomic is when a function throws a synchronous exception.

DRR: So it co-opts that. Could you not say something like Promise.resolve with calling the function itself?

JHD: You can’t because it throws the exception. You have to wrap in a promise catch or whatever.

DRR: Okay. Got you. This is effectively calling the function, and then wrapping that in a try/catch and rejecting instead.

DRR: Fair. Yeah. I wish it was something like adapt

JHD: I am not attached to the name. If you look at the other userland implementations, it’s almost always been called try. There’s an attempt in there, and an fcall (but I don’t think fcall is anything anyone would support). attempt is an interesting alternative, but it only appeared once in the list, and even that library still has try.

DRR: Okay. Got you. All right.
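The synchronous-throw hazard discussed above, i.e. why `Promise.resolve(f())` doesn't suffice, can be shown directly. This is an illustrative sketch, not from the meeting:

```javascript
// f() throws before Promise.resolve ever runs, so the exception
// escapes the promise pipeline entirely.
function f() { throw new Error('sync failure'); }

let threwSynchronously = false;
try {
  Promise.resolve(f());
} catch {
  threwSynchronously = true;
}
// threwSynchronously is true: the throw was not converted to a rejection.
```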

JHD: Given that, I would like to ask for consensus for 2.7 with that pull request merged, that forwards arguments?

CDA: You have a + 1 from MM.

CDA: Do we have any other voices explicitly supporting promise.try for 2.7? DE?

DE: So in the discussion, we heard a number of people being sort of vaguely wondering about the motivation. I guess that’s how I feel about this proposal as well. It doesn’t seem bad, but it’s not something I would personally reach for. I wonder if we should do something like a temperature check to understand how well motivated people in the committee feel this proposal is. I know that’s not usually the way we use temperature checks, but I am a little bit concerned about the, you know, proportion of skepticism versus explicit support.

JHD: I mean, if we feel that’s appropriate, we can certainly do that. But that would, I think, be the question to ask when going for Stage 2. Stage 2 approval means the committee approves the motivation.

DE: Sure. But this is quite common when proposing things for Stage 2.7. If the proposals that I got to Stage 2 didn’t require more motivation after Stage 2, it would be a lot less work. Yeah. This is why we have these conservative defaults in committee, requiring this repeated consensus, to make sure.

JHD: Yeah. I mean, I think – when we certainly go through the exercise if we feel it’s a good use of committee time. But the – if there’s no negatives and some positives, and a significant amount of user-land evidence that it’s desired, that seems pretty straightforward to me.

DE: So I wanted – leave it up to you, whether you want to allow the temperature check. If you think it’s inappropriate here, then let’s not do it.

CDA: Let’s go to MM on the queue.

MM: Yeah. I am fine with doing a temperature check. Before we do the temperature check, I want to add a cognitive burden argument in favor of this proposal. Promises have catch and finally. So people looking at that would naturally look for Promise.try and I think it’s less surprising for it to be present in a way that works, that chains well with catch and finally than it would be to be absent.

JHD: Thank you. I agree with that. As the champion, it would be self-serving to disallow the temperature check if offered, and I am not trying to be self-serving. If folks think it’s a good use of committee time, we can certainly do it. I am not seeing any indication that it is, but I want to defer to the room on that.

MM: Let’s do a temperature check.

JHD: Okay. Let’s do it

CDA: Let’s define exactly what are the parameters, what do the different choices signify?

DE: Maybe the temperature check is on: does this proposal seem useful to you? I think the strongly-positive-to-unconvinced/confused spectrum is kind of perfect for that sort of question. What do you think, JHD? That’s the question.

JHD: Yeah. If anyone has a more negative sentiment, jump on the queue and stress it. Otherwise, the default labels on the emojis are sufficient.

DE: "Does the proposal seem useful to you?"

KG: Can we clarify. Are we saying does this proposal seem to you to be useful, or does it seem to be useful to you? Because there are lots of things I personally am never going to use, but sure, it seems useful.

DE: Okay. "Seems to you to be useful" is weaker in all senses. Yeah.

JHD: Useful to somebody, in other words.

CDA: Okay. Is that clear for everyone? Is it not clear to anyone?

DRR: Restate it once more, please.

DE: Does the proposal seem to you, to be useful generally?

DRR: I think that seems clear

CDA: The temperature check interface is now visible. I guess we will give it till another minute and a half or until there is no movement.

NRO: My position is that it doesn’t seem bad, but I am not convinced it is useful in general. Other people with my position voted 'indifferent'; how is that different from what it means?

JHD: Like, you’re not convinced by the arguments?

NRO: I am not convinced by the popularity argument, because I can’t see the benefit of the packages, and it seems like the position that others mentioned: this is not bad, but they don’t see it being useful. I just put "indifferent", as DE expressed the same position about it.

JHD: Yeah. I mean, it’s more like: do you think it’s useful to sufficient people? Obviously, people are saying "this is useful for me", and you can’t invalidate that; it’s obviously useful to somebody. But so, yeah. I mean, I don’t know. I think either one is fine.

CDA: All right. I think we have gotten all the votes we are going – we just had a new one show up. It appears that the totals basically are 6 positive and 10 negative.

DE: Indifferent isn’t quite negative. I think it’s best to come back with a little bit more evidence for this. Obviously, the vote doesn’t come to any sort of conclusion itself. But that’s what I would recommend to the champion

JHD: Yeah. I mean, I think my argument is complete. Like, I don’t know what additional evidence I could provide, and I thought I had made that presentation when it achieved Stage 2, so I am not clear on what value that would add. So if someone wants to withhold consensus for 2.7 for me to do that, that’s fine. But I would need a concrete call to action on what to bring back, because it seems that the evidence is already there.

CDA: We are almost out of time. MM is on the queue

MM: Yeah. I read "indifferent" as abstain, not negative; "unconvinced" is there to be the negative. Based on this, you should call for consensus right now.

JHD: Then, yeah, I will repeat that: consensus for 2.7?

MM: I support.

WH: I support.

MM: (on queue) Support

BSH: I support

CDA: A + 1 from TKP. Okay. Are there any opposed? Are there any who are not explicitly opposed but would like to state some dissenting views for the record?

DE: I kind of want to dissent from Mark’s interpretation of indifferent as abstain. You can abstain if you abstain. I voted indifferent to mean what Nicolo said, but I am not objecting to consensus.

MM: It’s not in front of us anymore, but I don’t remember there being a choice to abstain.

DE: You can just not vote.

MM: I did not mean abstain. I voted for the same reason as Nicolò, which he just explained.

JHD: To be clear, to me it’s a weak negative: someone who is not willing to block consensus on it, but wouldn’t vote for it.

DE: Right. Yeah. I think that matches.

CDA: Also, they had multiple opportunities if they were going to block, but they still can; they can speak up right now. Hearing nothing - and of course Dan’s dissenting view on the meaning of the temperature check is recorded in the notes.

DE: A few people are typing in the chat. Maybe let them share their thoughts.

CDA: We are past time. So unless it rises to the level of blocking consensus, we are going to move on. Okay: Promise.try has Stage 2.7, congratulations.

JHD: Thank you

Speaker's Summary of Key Points

Some hesitation about motivation; a number of people are unconvinced of the utility - but nobody objected, and multiple members are convinced of the utility.

Conclusion

Promise.try has stage 2.7.

RegExp.escape for stage 2.7

Presenter: Jordan Harband (JHD)

JHD: All right. So now we have regular expression escaping. This one does have all of the open questions resolved, including the hex escaping we discussed in the previous meeting. All the reviewers and the editors have signed off. As a review: RegExp.escape takes a string and escapes it so that you can make a regular expression with it, and it will do what you expect.

JHD: The escaping it does now is much more thorough and verbose than in the past. As a result, the output is not just much safer but, in theory, safe to use to create a regular expression, even if you concatenate the string with something else that has meaning inside regular expressions. So I am requesting consensus for Stage 2.7.
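To make the basic idea concrete, here is a deliberately naive illustration of what escaping buys you. This is emphatically not the proposal's algorithm, which escapes far more (including via hex escapes, as discussed below); it only shows why escaping is needed before interpolating user input into a pattern:

```javascript
// NOT the proposal's algorithm; a minimal sketch of the idea only.
// Escapes the classic RegExp syntax characters with a backslash.
function naiveEscape(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, c => '\\' + c);
}

const userInput = '1.5+2';
const re = new RegExp('^' + naiveEscape(userInput) + '$');
// re now matches the literal input, rather than treating "." and "+"
// as wildcard/quantifier metacharacters.
```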

MM: So we reviewed this in Agoric with RGN, who is not available today. My understanding from what he explained when we reviewed this is that the original agreement about how safe this was, which allowed it to go forward, was that except for the even/odd backslash issue, it was safe in essentially all contexts. What I understood from Richard was that if there’s an additional backslash before the first character, then that property is restored; without that, there was an exception to that property, a particular context that could be confused. I can look that up. But do you know what RGN is talking about, from his previous feedback?

JHD: I mean, he’s filed a number of issues that have all been resolved. I am assuming that that’s what they are. If not, I am not aware of it.

MM: When we went over this just very recently, like in the last day or two, the feedback from him was that this issue was not resolved. Does RegExp.escape put a backslash before the first character of the string being escaped?

JHD: I don’t think it does that unconditionally, no. I am pulling the spec up now. [inaudible]

MM: Okay. I need to look up the – it will take me a moment to get the –

JHD: I mean, the meeting is – we still have a few more days this week. I would be content with Stage 2.7 conditional on this issue either being resolved or a non-issue. I will bring it back later this week.

MM: So let me just make sure that we’re on the same page. If there is a realistic issue, and if it is solved with an extra backslash somewhere, then since we have already given up the readability of the output, you wouldn’t have a problem with an extra backslash as necessary to restore the original safety claim?

JHD: Correct.

MM: Great.

KG: I do want to clarify the original safety claim, though. It’s not that using RegExp.escape is safe in all contexts, but that it is safe in contexts where it doesn't clearly mean something else. If you put the output of RegExp.escape immediately after a single backslash, then yeah, it is not going to match the thing that you put into RegExp.escape; it’s going to do something else. And that's impossible to avoid, as far as I am aware. And there are a couple of other places without that property; in particular, off the top of my head (and I am not going to claim this is a complete list), after \x or \c or \u.

MM: \c and \x ring the right bell. I don’t think RGN was talking about \u.

KG: U is isomorphic to X here.

MM: In that case, it’s probably included. Does an extra backslash solve \c and \x and \u?

KG: So it’s not an extra backslash per se. The solution to those is that if the first character is an ASCII letter, then you escape it with a hex sequence, the same way that if the first character is an ASCII digit you escape it with a hex escape, which we needed to do because of backreferences: after \1, you need the output of RegExp.escape to not be interpreted as being part of the escape sequence. We didn't do that for \x, \c, and \u, because there’s no reason to have \x followed by the output of RegExp.escape; that clearly is not going to do anything sensible. But I believe escaping ASCII letters is sufficient; I'll have to look up exactly what characters can occur after \c to confirm that. And to be clear, nothing can possibly solve the issue of putting it after a backslash by itself. So this will only apply to those three.
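The backreference hazard KG describes can be seen directly. This is a constructed example, not from the proposal: it shows that a digit appearing right after a backslash is parsed as part of a backreference, which is why escaped output must not begin with a bare digit in that position:

```javascript
// '\\' + '1abc' concatenated into a pattern parses as backreference \1
// followed by "abc", not as a literal "1abc".
const re = new RegExp('(x)\\' + '1abc');
// \1 matches a repeat of the captured group, not the character "1".
```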

MM: Yeah the original agreement gave up on even versus odd backslash. That was understood. It was just that I didn’t want there to be any other contexts and at the time, we had the agreement, we believed there were no other contexts.

KG: We did mention \x and \c in the original presentation, to be clear. I mean, if you are saying that this is an issue, that’s fine.

MM: Okay. So I believe you; I just don’t remember that. And in any case, since we have given up on readability, I don’t see any reason not to do this, if there is a solution that gives us safety in strictly more contexts.

KG: I am not opposed to doing this, but I don’t want to phrase this as giving up on readability. We have decided that we are making tradeoffs around readability that favor, for example, not having to change the grammar of regular expressions, which is a fine tradeoff to make. And we could decide we want to make the tradeoff to favor the ability to use this after \x, \c, or \u over being able to read the output, if the output is a bunch of ASCII letters. That’s a tradeoff we could choose to make, but I would not phrase it as giving up on readability.

MM: Okay. That is the side of the tradeoff that I would strongly prefer. I think that safety dominates and as far as I am concerned, I have given up on readability.

KG: Fair enough.

CDA: We have 10 minutes left. Waldemar is next

WH: A couple of items: one is about safety, one is about the recent change to make escapes less readable. The proposal makes the argument that it’s safe because it escapes all whitespace and newlines, as stated in the safety explainer. But the proposal does not escape newlines. So I don’t understand what the intent is.

KG: That is surely an oversight. The purpose of escaping whitespace, to be clear, is that there is a proposal for x-mode RegExps, which allows whitespace to be ignored (unless escaped), which can improve readability. You won’t use newlines in literals, but you can use them with the RegExp constructor, as you already can. So the intention was to escape everything that was usable there. Okay. So, sorry, WH, this is my fault. I had it in my head that whitespace included line terminators. This should include the JavaScript LineTerminator characters: CR, LF, and the paragraph separator and line separator Unicode code points.

WH: Okay. Following up on this topic: there was also a GitHub issue about surrogate handling and what happens if more Unicode characters get added to the whitespace category. But there is a safety concern, which wasn’t addressed, in that if there were a non-BMP whitespace Unicode character and somebody dribbled it to RegExp.escape one code unit at a time, it wouldn’t recognize the unpaired surrogates as whitespace. Then they concatenate the results, and those become whitespace which is unescaped.

KG: That is true. The obvious fix for that is to say that we also escape unpaired surrogates.

WH: Yes.

KG: I don’t see any reason not to do that.

JHD: That would be a trivial spec text change.
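The code-unit-splitting mechanics behind WH's concern can be demonstrated with any non-BMP character (no current Unicode whitespace character is non-BMP; this uses U+10000 purely to show the surrogate behavior):

```javascript
// A non-BMP character occupies two UTF-16 code units. Processing one
// code unit at a time sees two unpaired surrogates, not the character;
// concatenating the separately-processed halves reconstitutes it.
const astral = '\u{10000}';
const high = astral[0]; // lone high surrogate
const low = astral[1];  // lone low surrogate
const recombined = high + low;
```

If each lone surrogate passed through an escaper untouched, `recombined` would be the original character, unescaped, which is why escaping unpaired surrogates closes the gap.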

WH: Okay. So that covers my safety point. My readability point is that I don’t have ASCII codes memorized. And I would much prefer IdentityEscapes or more readable escapes to escaping via \x with ASCII codes since it’s much easier to understand the output and it doesn’t affect safety in any way.

KG: To be clear, for some of the items in this list - for example dash - you can’t escape dash outside of a CharacterClass. Are you suggesting that we modify the RegExp grammar so that \- is legal, or only the ones that already have escapes?

WH: You can’t escape dash outside of a CharacterClass?

KG: In a U mode Regex.

WH: OK. I am not suggesting we modify the RegExp grammar in any way, but I am suggesting, for the characters for which an IdentityEscape exists and is uncontroversial, that we use it. I would prefer \n instead of \x0A for line break, and I would much prefer \. to \x2E and a backslash-escaped space instead of \x20.

KG: Backslash-space has the same problem: it’s not currently legal in u-mode RegExps.

WH: Okay.

KG: But there is a subset that is legal.

WH: Yeah. For the things which are currently legal, I would prefer to use those. But I am not asking us to change the RegExp grammar.

KG: As the person who originally was trying to get us to change the RegExp grammar to use all of these, I am happy to recover what readability we can for the subset that is feasible.

WH: Okay.

JHD: Same.

JHD: To summarize, it sounds like there are additional changes that I should attempt to make. One is that unpaired surrogates should be escaped. Another is that we should attempt to restore readability for newlines and perhaps a list of other characters, whichever characters we see fit that are also legal in both u-mode and non-u-mode RegExps, so `\n` instead of the hex code for it. And potentially an additional change that Mark was referring to with the first character in the string. With those three changes, I would then come back at a future meeting, not this one, because that’s too much change for me to be comfortable trying to shoot from the hip on, and request 2.7. Does that sound like an accurate summary?

MM: Except for the "potentially". I am asking for that.

JHD: Okay. Yeah. If we can get an issue filed for that, MM, that would be helpful. But that is included.

WH: I would like to push back against MM's request for escaping the first character if it’s a letter. I don’t understand the rationale for it. If you’re escaping user inputs, the context they’re placed into should be a valid regular expression on its own. You shouldn’t be concatenating \x3 followed by user —

MM: That was – I mean that was exactly why I was opposed to this entire proposal in the first place. And was insisting on a template tag that could do context-dependent escaping and deal with the backslash even odd problem. The thing that convinced me to go forward is the understanding that the only context that remained problematic was even or odd backslash. And RegExp are sufficiently complex and have sufficiently large surface areas, large number of features, if it’s anything more than just even or odd, it just drops out of memory.

KG: For \x followed by RegExp.escape, I just don’t think someone is going to try to do that and expect any particular behavior.

MM: I think that having a simple to state safety property is a very, very important aspect of having a safety property.

MF: Kevin can you clarify why you think \x is different from \0?

KG: \0 is a totally reasonable thing to write. You are expecting to match a null CodePoint followed by some user input.

MF: And \x is an IdentityEscape for x.

KG: No. No one is writing that.

MF: But it’s valid.

KG: In non-u-mode RegExps, that’s true, but no one is writing that.

MF: Okay.

MM: I got clarification from RGN. The issue is issue number 66. He’s not able to be online, but he sent me a link to it.

JHD: Okay. This one is currently closed. So if I need to reopen it, and that’s fine.

MM: I have some further clarification from Richard; my interpretation is yes, you need to reopen it.

JHD: Okay. Will do.

MM: Wednesday and Thursday, Richard should be back.

CDA: Okay. We are just about at time.

JHD: And that conclusion from before, I will adapt that into a summary in the end, in the notes. No advancement today. I will come back at a future meeting to request 2.7 again. Thank you.

Conclusion

  • Will make additional changes and return in a future meeting:
    • unpaired surrogates should be escaped
    • we should attempt to restore readability for newlines and perhaps a list of other characters, whichever characters we see fit that are also legal in both u-mode and non-u-mode RegExps (e.g. `\n` instead of the hex code for it)
    • an additional change from MM/RGN with the first character in the string.
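The readability change discussed above can be illustrated with a sketch. This is a hypothetical mini-escaper, not the proposed RegExp.escape algorithm; both function names are invented for illustration. Note that space still has to use `\x20` even in the "readable" variant, since per KG a backslash-space escape is not legal in u-mode RegExps.

```javascript
// Hypothetical sketch contrasting hex escapes with readable escapes.
// Both outputs match the original text when passed to the RegExp constructor.
function hexEscape(s) {
  // escape '.', newline, and space using \xHH codes
  return s.replace(/[.\n ]/g,
    c => "\\x" + c.codePointAt(0).toString(16).padStart(2, "0").toUpperCase());
}
function readableEscape(s) {
  // use the readable forms where they are legal in u-mode RegExps
  const map = new Map([[".", "\\."], ["\n", "\\n"], [" ", "\\x20"]]);
  return s.replace(/[.\n ]/g, c => map.get(c));
}
console.log(hexEscape("a.b\nc"));      // a\x2Eb\x0Ac
console.log(readableEscape("a.b\nc")); // a\.b\nc
```

Both forms are safe; the second is simply easier to read if you don’t have ASCII codes memorized.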

Make eval-introduced global vars redeclarable for stage 2.7

Presenter: Shu-yu Guo (SYG)

SYG: Okay. This is literally the same slide deck from last time. So I will quickly go over it again. It’s a recap for folks not here last time, but the content and the normative changes I am asking for are basically unchanged.

So there’s this thing. In the spec, there’s a slot on the global scope basically called VarNames. And what is this thing? In general, in the language, we disallow lexical bindings, things like `let`, `const`, and `using` bindings, and var bindings from sharing the same name in the same scope. We throw a redeclaration error when you have conflicting var versus lexical binding names. This is generally true in all scopes except the global scope, which is special because, one, it is an open scope, meaning there is nothing syntactic you can do to close the global scope. In the HTML embedding, you can open script tags and add more declarations. The other special thing about global scope var bindings is that they are literally properties: you can get property descriptors for them and access them as properties.

SYG: So what do we do when we extend this general rule, disallowing lexical binding names from conflicting with var binding names in the same scope, to the global scope? Something like this is disallowed. This seems good. If you have a `var x` in one script and a `let x` in another, those conflict: the name x conflicts. So we disallowed that - fine. We also disallowed this: if you have a non-configurable global property named x, that set of names also conflicts with lexical bindings. Fine. We also take special pains to disallow this, which is that sloppy direct eval at the top level is allowed to introduce new var bindings into the enclosing caller’s var scope, basically. So in the first script tag the caller is the global scope, which means that eval introduces x as a global var binding. And because it’s a global var binding and we want that to conflict with lexical bindings, we also disallow this currently. This seems like good motivation; this seems a fine thing to disallow. Except: how do you implement this?

SYG: So the quick detour is to first remember the direct eval var semantics. These apply in all contexts, with some special upshots for the global context. When you have a direct sloppy eval and it introduces a var binding, the binding that it introduces is deletable, so it is configurable in the property descriptor sense. In general it is deletable: even if you introduce a var at a function scope, that var is also deletable. When you are at the global scope, this adds a property to globalThis like every other global var. The upshot here is that the eval-introduced var is added as a configurable property of globalThis. But wait: you can manually add a configurable property to globalThis, and we don’t disallow that case. If you manually add one, you are allowed to shadow the configurable global property with a lexical binding. Which means that in order to disallow this snippet but allow that snippet, we need to introduce a new kind of thing to specially track the global properties that were introduced via eval var.

SYG: And that is what 'VarNames' is. It is a list of names on the global environment whose purpose is to distinguish direct eval vars from ordinary configurable properties. Ordinary vars do not need to be tracked via VarNames because those are non-configurable properties. So my claim is, knowing the implementation complexity: what are the use cases here? Are there use cases? This was a question I put out to the committee last time. You shouldn’t use sloppy direct eval to introduce these. Please don’t. That’s terrible. And you can already redeclare them, but you have to delete them first. There are three cases where we check for a name conflict and then throw a SyntaxError when you declare a let or const with a like-named binding. Number 1 is conflicts between lexical bindings and other lexical bindings. Number 2 is conflicts between lexical bindings and vars introduced syntactically, not via direct eval; because these are non-configurable properties, the non-configurable-property check catches those cases, while things defined configurable, like a lot of top-level functions we put on the global scope, are not caught. Number 3 is the special rule for direct sloppy eval. My proposal is to remove number 3. And the upshot of removal is that this is now allowed: `let x` will shadow the eval-introduced var x, exactly as if you had typed `globalThis.x = whatever`.

SYG: And the update since last time is that this was moved to a proposal, so it could move through the stages normally like a proposal, instead of as a needs-consensus PR. That is done. And folks would have liked some time to consider the ramifications here. I believe this is a web-compatible change because it’s moving from a redeclaration error to a non-error. But you also shouldn’t be doing this.

SYG: With that, I will take the queue questions, if any, before asking for Stage 2.7

MM: so, first of all, I want to say, thank you for having moved it to a proposal. I read it carefully. This was a great presentation. You presented all the issues very clearly. I am quite in favor of this going forward. But I would like to hear WH’s opinion on this as he has investigated a lot of thinking of global scopes versus the global lexical environment conflict between var and let and all that. Waldemar?

WH: I haven’t had time to look at this in detail. I don’t have an opinion.

MM: Okay. I am in favor. Since you haven’t had time, are you willing to let this go forward to 2.7 based on the presentation?

WH: Yes.

DLM: (on queue) supports 2.7.

CDA: Nothing else in the queue right now.

SYG: Okay. If the queue is drained, then I would like to officially ask for Stage 2.7 consensus.

CDA: All right. Support from DLM

KG: Support.

MM: Support.

SYG: I am going to take that as good. To be perfectly clear, this will require engine changes in all engines, I believe. It’s 2.7 because there are some tests that test the current behavior, which is the opposite of my proposed behavior. I plan to update those tests before the end of this meeting and come back for Stage 3 at the end, with like a 2-minute item, an FYI, if there’s time for that. Thanks.

Conclusion

  • Stage 2.7

ESM Source Phase status update and layering change

Presenter: Guy Bedford (GB)

GB: I wanted to give an update on the proposal that got to Stage 1 last time, which was ESM phase imports. While the source phase imports proposal gives us the source phase for WebAssembly modules, this proposal represents that phase for JavaScript modules themselves: the concept of a module within the module system. We also had implementation feedback on source phase imports from SYG on Friday, which I will get to at the end of the presentation, so it ties into a lot of the same concepts. We can go to the next slide.

GB: So to just go back to the use case that we are seeking to solve: portability for JavaScript modules. The problem we have identified is worker instantiation in JavaScript environments, where you can pass an arbitrary string. It’s a path, but it’s relative to the base, not the current module. It’s not really a modular pattern. It’s difficult to have things work everywhere, and there are these frictions. So we identified this as a problem that would be useful to solve with a phase import for ECMAScript modules. And this is exactly what we solved for WebAssembly modules through source phase imports: through the same module system you can gain access to not just an instance, but the compiled module, to create multiple instances and to create instances with different import values in WebAssembly. It’s statically analyzable: you can see where you are using WebAssembly modules, and it integrates well with CSP policies, so it’s not just one policy that we can associate with them. And so the idea is that we get a lot of those same benefits through defining the phase for JavaScript modules. Because if you can import a module and treat it as a capability for that module, in a lot of ways it represents the key. All the same benefits: tooling support for workers, where tools can see when a worker is being used; it’s easier to see what modules are referenced; and when a module is referenced, you can do a relocation and reference the bundled version instead.

GB: The security argument doesn’t fully hold, though. This is one of the things that came out of the discussion: workers are gated by one CSP policy (worker-src), while the other is script-src. And so there is something that would need to happen for this worker integration, which is a refinement of the CSP policy. The idea here is that in the integration, because worker-src might be more refined than script-src, you need to reexamine the URL that the module originally came from, verify that it still passes the policy, and throw an error at the time you try to create the worker. The way this works: all implementations today have the URL in the host-defined metadata on the module record. We basically just define that we pick up the URL again out of the host-defined field, and this is HTML integration, and reverify. I wanted to flag it because it’s a really interesting and important property that needs to be maintained, which we have identified through further discussion.
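For context, the two CSP directives in question can diverge like this (a hypothetical policy; the origins are invented for illustration): a page may be allowed to run scripts from a CDN while only spawning workers from its own origin, which is why the worker constructor has to re-check the module's original URL.

```
Content-Security-Policy: script-src 'self' https://cdn.example.com;
                         worker-src 'self'
```

Under such a policy, a ModuleSource fetched from the CDN would pass script-src at import time but would need to fail the worker-src check at `new Worker(source)` time.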

GB: So one of the big questions we had: which phase are we specifying? There was discussion as to whether it should be the instance or the source phase, and there are tradeoffs. 'Instance' represents a graph of modules linked together, whereas a source represents a single compiled ModuleSource before it’s been linked. So it’s the compiled module. When you think about things like transfer, moving between agents, transfer of instances is a complex thing to think about, because what does it mean to share graphs? To share state? To share instance state and errors and things like that? A source is much more amenable to transfer; it avoids those transfer problems. There are gating cases, like the worker case, where you check the CSP policy, that one does have to be aware of, but it’s simpler than instances. At the same time, we have already specified a source phase for WebAssembly.Module, so we can build on that and create instances of it without having to start from scratch. In addition, the design of instances is mostly constrained by the loader use case: you have to think about linking, about host hooks, about membranes, whereas the source is a building block that is useful in loaders but doesn’t touch on the loader problems.

GB: For that reason, we have made the decision to go forward with the source phase for now, which is what we are designing for: specifying a concrete ModuleSource that extends AbstractModuleSource. The source is already associated with the registry key per the phasing model. So when you have a ModuleSource through the Wasm integration, the host-defined information on it already has the URL, already has the information about the registry key; that’s how implementations use it. And so we effectively build on that to support dynamic import of sources as well. This is what makes it useful beyond the worker use case: if we make it dynamically importable, we layer with the use cases for module expressions and module declarations. So that’s the direction we have decided to go for now, and we are going to follow up with some further design work. In addition, we want to specify the reflection API on the built-in objects, to get the imports and exports of the module. And these could go on AbstractModuleSource, so that they apply to any ModuleSources, since those are ideally all cyclic module records. But that’s an open question, to some extent.

GB: So for dynamic import, I can import a source. The source represents that kind of capability to import the module. In a lot of ways, the source phase extends from the key in the loader, so when you import the source, you import the key. There’s no state associated with it: we go from the module to its key and from its key to the import. What that means is that you get the same instance every time you pass the same ModuleSource into dynamic import, but a different instance depending on what context you’re in. I will get to that later with the loaders integration. It acts like a capability for the module, and it’s checked: the CSP check has already happened. If you have the ModuleSource object, you have the capability to use it, unless you’re passing into a context with a more refined CSP policy.

GB: The same effectively works with WebAssembly modules: import a source and treat that as a capability for its import, as if you had imported it without the source phase. Whether sources you construct yourself would be dynamically importable is very much up to the host integration. In some ways, if you’re running with an unsafe Wasm eval policy, a weak CSP policy, it might make sense to support from a security perspective. It requires defining the key at the time of construction. So that’s a question for the other specifications and integrations to determine; it’s still an open question for now. But by default, if you construct one yourself, it wouldn’t be able to define the key.
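The shape being discussed looks roughly like this. This is proposal syntax, not runnable in current engines, and `./lib.wasm` is a hypothetical module specifier:

```javascript
// Stage 3 source phase import for WebAssembly (already specified):
import source libSource from "./lib.wasm";
libSource instanceof WebAssembly.Module; // true, per the integration

// Proposed extension: dynamically importing a source acts as importing its
// registry key, so the same source yields the same instance within a context.
const ns1 = await import(libSource);
const ns2 = await import(libSource);
// ns1 === ns2 within the same loader context
```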

GB: Imports and exports have an initial design, based on providing some standard kind of analysis information: what imports there are, their attributes, what phase they’re in, and the names of the exports, star exports, and re-exports. This is just an initial design, but we’re starting work.

GB: The feedback we have gotten so far: YSV posted an issue. One thing, the new worker would be able to apply directly to the source and because we know it is a module worker, we could exclude the need to provide the {type: "module"} object which you would normally provide as a second argument into the new worker constructor. The argument here is that there is a readability benefit in maintaining the type module object because you can tell by looking at the code, if you are creating a module worker or a script worker.

GB: So I think one of the main things we identified here is that when you construct a worker with a ModuleSource, you know it has to be a module worker. So if there is no `type: "module"`, we should have an early error rather than trying to load it as a script; it will definitely be an early error. Whether we leave in the `type: "module"` requirement or not is an open question, and I think that will just have to be part of the integration discussion, but it could go either way still. So it might keep the `type: "module"` aspect.

GB: On the instance layering: we presented this to the SES group, sorry, the new TG group, and it layers well with instances. There are some compartment-related questions, because when you dynamically import a ModuleSource it’s associated with the registry key, which is effectively the URL and its attributes, whatever the URL is in that environment. And dynamic import has a different meaning in different compartments, because the same source could have a different meaning in each. The simple example to think of is passing a source object across a compartment and what you expect to get: we should definitely expect to get the representation of the same module in that loader. There are questions to address here; these are things that have been bounced around and avoided for a long time, but they are starting to be top of mind as we do this work.

GB: So just to update on the layering: by going with the source phase imports design, we will be seeing something like this. Previously we had two branches, for instance and source. I think the source phase will allow us to layer with module expressions and module declarations; those specifications should become very direct specifications on top of this base. And then module instance would effectively move inside of loaders, so specifying module instance would be part of the loaders specification. So that is how the layering updates.

GB: So, the imports layering question; I should briefly give some of the background on this. When we get to discussion, I would like to first discuss the layering question and then all of the other design questions around the source phase in general, just to use our time well.

GB: But to give the background on that and dive into that discussion: we are allowing non-ECMA-262 objects to be provided through this source phase. And when we brought that up at plenary, the argument from Jordan was that these are host-defined objects being returned through the module system, and so we should have some kind of way to bless them, or at least know that they have certain properties. Jordan said there should be a strong branding check, a toString that cannot be forged, and this was deemed to satisfy that branding question. The way it worked was, we say that these objects extend from AbstractModuleSource; you can’t do that in userland, and hosts can then set the slot, so you know you have got something that can only be a genuine ModuleSource object, even though it is a host-defined object and not an ECMA-262 object.

And then, by setting that initial groundwork, we could potentially make sure that we are supporting the properties we need from an ECMA-262 perspective.

GB: SYG’s argument, on the other hand, is that internal slots pose implementation difficulty, because host-defined objects usually don’t have ECMA-262 internal slots. So instead, the idea is that we should use a host hook to define this interaction. What we have got is a new "host get module source"-style hook that works basically the same way, in place of that internal slot. It’s not a normative change; it’s a straightforward layering change, but it affects the layering, and the WebAssembly ESM integration layering in particular, which is going to the Wasm CG tomorrow to get a phase 3 vote. So this layering change is critical to that process as well.

GB: So the PR we have up for the layering adjustment adds the new host hook and updates the toStringTag to call the host hook; if the object is not one that the host decides is a ModuleSource object, it will return undefined. It’s the same behavior, but with the new layering. That PR is up on the original Stage 3 source phase import proposal, PR 62. I would like to go into the discussion on that with Shu, make sure we have got everything clarified, and then go into the wider discussion. So let’s take a look at the queue.

SYG: So I want to give some background on the motivation a bit more. The implementation difficulty is not an impossibility kind of thing. It’s twofold, I think: somewhat of a difficulty, and somewhat of a future-proofing, maintainability argument. And the argument basically is that we don’t, today, have subclassing as a way to cross the host boundary. Today we don’t say that, in order to embed JS into something, HTML, whatever, one of the ways the host can hook into things is to provide proper subclasses of JS things defined in 262.

By requiring slot checks, that basically means that, at the spec level at least, providing real subclasses becomes one of the ways to cross the host boundary, and that is not how I want the JS spec to be layered. There are unintended consequences I haven’t thought through; I feel it could have unintended consequences if we assume things that the host provides can in fact be real subclasses. So the argument is that today, for this proposal, as you have pointed out correctly, it’s an editorial change. I think that extends in general: anything I want expressed as a slot, you could do as a host hook, at least locally. What I am worried about is that if we don’t express it as a host hook, there are non-local things that are easy to miss in review for future proposals and future editions that assume a proper subclass, and you do something in (?) giving rise to difficulties down the road that we have to reason about. If you directly express it as the host having to provide objects that behave a certain way, and you can check that they behave a certain way via the host hooks, and you make that explicit at the host level, that better reflects reality than saying: here’s a slot; the check has to be done at the spec level, but you can implement this as a host hook if you really want, because these are observationally equivalent. Figuring out whether it’s equivalent gets harder and harder as more and more behavior gets hung off of the slots. So I would like to not do that as a spec thing. But I think you are correct, it’s strictly editorial. I would welcome feedback from other web implementers here on the subclassing question, because I would like it as a precedent that we keep the spec host boundary to be exclusively host hooks with the constraints we put on them.

GB: Clarifying points, if that’s okay. On the question of internal slots, to be clear, there are no longer any internal slots with this change. It would only be host hooks.

And in terms of proper subclassing, the requirement we state is on the host hook that gets the ModuleSource object to begin with when you do the source phase import: the only requirement is that the object should have the AbstractModuleSource prototype as its prototype. So it’s a requirement on the object, but that’s the only requirement on the object. I guess I am not understanding your concern about proper subclassing. Are you concerned that behaviors of the subclass might not carry through the prototype chain somehow? We gave the example of imports and exports being an open question: whether they exist as a reflection on the ModuleSource object for JS only, or whether they exist so that WebAssembly would have the same API. That’s the benefit we get out of this, by specifying that prototype. But for now it’s just a minimal prototype that has nothing on it. I would be interested to hear what the proper subclassing question is that you have.

SYG: Let’s take NRO's question. It sounds like perhaps the same question.

NRO: Can you clarify what you mean by subclassing here? Because I understood the problems about the internal slot, but not the ones about the prototype chain. There are other cases where the prototype chain crosses the language boundary in the spec.

SYG: I mean, specifically, that there are hosts - I don’t mean the prototype, I will leave that aside. I mean that there are host-provided objects that must have some special internal slots that are not otherwise present on ordinary objects, basically. So in this case, the current spec text draft has this ModuleSource name slot, I think. But the host providing the different kinds of modules is an expected extension point for the host, which means that in a naive, literal implementation of the text, they create objects that have that particular slot: by subclassing, they must have particular internal slots that are not already present on ordinary objects. That can be implemented via a host hook, but it is editorially clearer, and I think it’s easier to think about, if we explicitly spec them as host hooks instead of as internal slots. Does that make sense? I can give a more concrete example from an actual implementation, if that helps.

NRO: Yes. Thank you for clarifying.

JWK: Did SYG already answer my question?

SYG: Reading your question, I don’t think I did.

JWK: If we are never going to cross the host boundary by adding new methods on the AbstractModuleSource.prototype, I think we should remove the AbstractModuleSource.prototype entirely.

SYG: So let me try to answer that. I think there is still value in having the AbstractModuleSource prototype, and the root cause is that there are at least two notions of subclassing in JavaScript. One is "do I have something on my prototype chain?" That kind of subclassing is more akin to duck typing, conforming to an interface: it’s valuable to have something be `instanceof` something, and to take, as an affordance of a certain prototype being on its prototype chain, that it behaves like that prototype suggests it should. That is a separate notion from the representational one: the layout of the instance, whether it is a subclass of another class. The representational notion is more about things like internal slots and private names and stuff like that, where you can, through return override for example, make a representational subclass of an object by calling the super constructor to install its internal slots and private names onto the instance, and later change the prototype chain to something else. Unfortunately, mechanically, these are separate in JavaScript. I am solely concerned about the representational thing. For the behavioral thing, "is this conforming to an interface", we use prototype chains, and I think that is very valuable to have. If host-defined objects are vended through source phase imports, they should behave and look like other things vended through source phase imports. But I don’t want to constrain that the representation must literally be a subclass of the thing we define in 262.
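The two notions SYG distinguishes can be demonstrated in plain JavaScript. This is an illustrative sketch; `AbstractModuleSource` here is a stand-in class, not the real intrinsic, and `Branded` is an invented example class.

```javascript
// "Behavioral" subclassing: having the prototype in the chain makes
// instanceof succeed, regardless of how the object was constructed.
class AbstractModuleSource {}  // stand-in for the 262 intrinsic
const hostObject = Object.create(AbstractModuleSource.prototype);
console.log(hostObject instanceof AbstractModuleSource); // true

// "Representational" subclassing: internal slots / private fields are only
// installed by actually running the constructor, so a brand check based on
// them distinguishes real instances from prototype-chain imposters.
class Branded {
  #slot = 1;
  static isBranded(o) { return #slot in o; }  // ergonomic brand check
}
console.log(Branded.isBranded(new Branded()));                    // true
console.log(Branded.isBranded(Object.create(Branded.prototype))); // false
```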

GB: That’s a really good point, I think, to separate the concept of the prototype from the layout, because at the moment our primary contention here is around the layout. It’s worth noting that the proposal is at Stage 3, so normative changes require agreement at this point. But from an implementation feedback perspective, we’re not defining anything about layout here. What we are defining, the spirit of the specification, is that from the object you get back, we should be able to get to the underlying registry key, however that’s done. So there is some kind of key, hash, or URL with module attributes or whatever; there should be some way to make that association with the compiled artifact, in the case of JavaScript, however the JavaScript is compiled. I can speak to the implementation design space further than that, what the intention is, but maybe we can have a discussion later to clarify what the V8 embedding looks like there. Specifically, if we can focus on this PR from the specification point of view: are there any reasons you think we should be concerned about landing this fix at this point in time? Do you think we should bring this back to committee? Do you feel there’s more design work that needs to be done?

SYG: I am convinced at this point that it is, strictly speaking, an editorial change, but I believe it’s an important editorial change to set editorial precedent, so we don’t accidentally make unintentional normative changes in the future. So, strictly procedurally speaking, I don’t think we need to ask for consensus here. But given the motivation, which is to prevent an accidental change in the future, it might be good to get affirmation from other browser vendors. I don’t think anything strictly needs consensus.

JWK: I want to make sure I understand SYG correctly. So you mean we should keep it for programmers to test with instanceof, but we will never add a method to it, because that requires the module to have some internal slots. Is that correct?

SYG: I mean something weaker. It's fine to add methods.

JWK: But for an added method to be useful, you need to access something internal, right?

SYG: Right. And the question is, editorially, should that internal access be done by host hook or internal slot, and my argument is that we should do it by a host hook instead of defining it in 262 via a subclass.

JWK: Okay. I understand. Thanks.

CDA: We are past time. Guy? Any final thoughts? Anything you want to record for the notes?

GB: Sure. If we can just note that the editorial change is moving from an internal slot model to a host hook model, which, yeah, I think that’s it.

Speaker's Summary of Key Points

  • The editorial change is moving from an internal slot model to a host hook model

Conclusion

  • Proposal remains at Stage 3

Atomics.microwait() (without mini wait) for Stage 2

Presenter: Shu-yu Guo (SYG)

SYG: So this is a continuation of something I presented last time, at reduced scope. Last time, I had presented something I was calling micro and mini waits in JS. Since then, I have done some more thinking and I think the most bang for the buck in the short term is to drop what I was calling the mini waits, which I will go over. This is the exact same slide deck as last time. The motivation is that we add this low-level Atomics stuff to help things like Emscripten write better locks. It’s important because the glue boundary between JS and Wasm, where Emscripten resides, is the system boundary. They implement things like libc and pthreads, so that Wasm can compile something like a C++ or C application. Under the hood it uses pthreads and that works. And that means that you have to write locks. So how do you usually write a lock? The usual way to write a lock nowadays is to have a fast path and a slow path in the acquisition. You want it to be fast when uncontended. It’s faster to not sleep your thread and to occupy the core with a spin lock if you believe that unlocking is imminent or that nobody else is holding the lock. If you design your application so that there’s not a lot of contention for some resource, you want this path to be very fast. Otherwise, if you need to sleep a little bit, that’s a syscall plus waiting to be woken up; you are adding slowdowns to your application for no reason if you believe most of the time the lock is uncontended. So you want there to be a fast path when contention is low. The problem is, if you do a spin lock naively, this has undesirable and unintended consequences on the CPU. CPUs really like to be hinted - I'm calling x86 out here, ARM does better - that you are doing a spin lock, so that the core and its caches can more efficiently load the value you're trying to acquire.
The upshot is that if you don’t hint the CPU, you get worse performance and scheduling. If you look at real-world locks and lock-free code, the spin loop calls something that yields the CPU in the body of the spin lock itself. There’s an intrinsic called _mm_pause that’s available in a lot of C compilers, and there's the yield instruction on ARM. That’s the point of this instruction: it exists to hint the CPU. It has no observable effects, except timing. The intention is that this waits for a short amount of time, hundreds of CPU cycles. And there is an iteration number that gets passed, because it is a common best practice to do some kind of exponential backoff so that you don't spin for too long. Should you choose to implement that, it helps to hint the microwait method itself with what iteration number you are at, so you don’t wait as long, and the constraint is that microwait of N waits at most as long as microwait of N + 1. And there’s this other thing, the slow path, where you want to be efficient when the lock is contended. When someone else is holding the lock for what you think is a good while, you don’t want to spin the CPU. That will pin the core to do nothing useful and increase power consumption. So you want to be efficient when you know you are not going to get the lock. The way to do this in native code is to put the thread to sleep.
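The fast-path/slow-path shape SYG describes can be sketched as follows. This is a minimal illustration, not a definitive implementation: it assumes the proposed `Atomics.microwait` API (falling back to a no-op where it is not implemented), and the spin count of 16 is an arbitrary illustrative choice.

```javascript
// Hypothetical: Atomics.microwait is the proposed CPU-yield hint.
// Fall back to a no-op in engines that have not implemented it yet.
const microwait = Atomics.microwait ?? (() => {});

const UNLOCKED = 0;
const LOCKED = 1;

function acquire(ia, idx) {
  // Fast path: spin a bounded number of times, hinting the CPU on each
  // iteration. Passing the iteration number lets the engine back off:
  // microwait(n) waits at most as long as microwait(n + 1).
  for (let spins = 0; spins < 16; spins++) {
    if (Atomics.compareExchange(ia, idx, UNLOCKED, LOCKED) === UNLOCKED) {
      return; // got the lock without sleeping
    }
    microwait(spins);
  }
  // Slow path (workers only, since the web never blocks the main thread):
  // sleep until the holder releases the lock.
  while (Atomics.compareExchange(ia, idx, UNLOCKED, LOCKED) !== UNLOCKED) {
    Atomics.wait(ia, idx, LOCKED);
  }
}

function release(ia, idx) {
  Atomics.store(ia, idx, UNLOCKED);
  Atomics.notify(ia, idx, 1); // wake one sleeper, if any
}
```

An uncontended `acquire` here returns from the fast path without ever issuing a syscall, which is the whole point of the hint.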

The problem with this in JS is that we can’t put the main thread to sleep. We can put worker threads to sleep with Atomics.wait, but there’s a policy decision that we don’t ever block the main thread, for responsiveness. And I had previously proposed that we let you clamp the timeout, so you are allowed to block it for a bounded amount of time. I am dropping that from this proposal because I don’t know a good way to do it. The previous proposal hand-waved away a lot of details. Specifically, there’s a complicated policy space in HTML around how much time is desirable to allow blocking, if at all. There are notions of something called an idle period that I was hoping to use, but it turns out it was probably never going to be available in any meaningful amount, so it would cause immediate timeouts anyway. So the short story is that I don’t know a good way to do this on the main thread. I am going to drop it for now; I don’t think there’s a good payoff in the short term to figuring that out. The microwait thing, on the other hand, improves Emscripten efficiency in the short term. So to reiterate, this is basically for Emscripten; it’s a very narrow use case. On paper, it’s for anybody who is writing locks and lock-free code. In practice, I recognize that’s a small amount of JavaScript code. Emscripten is large, and someone who compiles C++ with it is using pthreads via Emscripten’s implementation, in JS, of some very low-level calls that enable pthreads, namely futex.

In practice, Emscripten’s reach is wide. The number of developers writing such code is very small; the number of affected developers is very large.

SYG: So I am asking for Stage 2 for this proposal with the reduced scope of microwait only: no more waiting on the main thread for some clamped amount of time. I will take questions from the queue.

MM: Okay. It looks like I am first. I just have some questions and clarifications. SharedArrayBuffer is currently effectively optional, in that its appearance on the global object is optional; that was intended to make shared array buffers optional. And we didn’t make Atomics optional, but that was sort of under the understanding that Atomics is not useful if SharedArrayBuffer is not present. This does not take a shared array buffer as an argument, so you could engage in a microwait regardless. However, the concern would be not whether you can cause a delay of a given amount, but whether it enables a program to measure duration. And I don’t see a way this can be used to measure duration, but I want clarification on that. I also wanted your feedback on the idea of making SharedArrayBuffer and Atomics jointly normative optional.

SYG: On the first part, I do not think it enables writing a high-res timer, any more than writing a user function and doing performance.now before and after to calculate a delta. I don’t think it gives you any more power than that, which is something you can already do.

MM: Let me clarify my question. Suppose you’re in an environment in which all other means of measuring duration, such as Date.now and anything else, including indirect measurements through indeterminism, have been denied (a shared array buffer does give you an indirect ability to measure duration). If there’s no other way to measure duration, but Atomics is still there without SharedArrayBuffer, then this microwait, which does not take a shared array buffer argument, can cause a delay of an amount that you specify, which is fine. It does not seem like it enables the measurement of duration by itself. Is that correct?

SYG: That is my understanding. Yes.

MM: Okay. Good. In that case, I have no objection, but I would like your feedback on something tangentially related: since this raises the issue of Atomics not being completely without functionality in the absence of SharedArrayBuffer, what are your feelings on making Atomics and SharedArrayBuffer jointly normative optional?

SYG: My feedback is, I don’t think we can, and I will clarify why. While the SharedArrayBuffer constructor can be removed, or is removed in certain contexts, on the web at least the thing that we block is the ability to communicate a shared array buffer to different threads, not the ability to create one. So concretely, even if the SharedArrayBuffer constructor is not present, you are still able to create a shared array buffer via a shared Wasm memory, in an environment where communicating that shared memory is still denied. So if you postMessage that memory you get an error, but not when you try to create the shared array buffer. And Atomics also already works on non-shared array buffers. The reasoning for that was that applications that want to ship one copy of their binary, compiled for shared array buffers, can progressively degrade to regular array buffers in contexts where shared memory is turned off, and they don’t have to recompile and ship a completely different binary that doesn’t use Atomics. They can use normal array buffers, except for waits, which are disallowed because they would immediately deadlock.

MM: That was very clarifying. I didn’t realize that. Let me just confirm: Atomics on a non-shared array buffer is completely harmless with regard to all my concerns; it sounds like you confirmed that. And I am happy to clarify my concerns off-line, if this is too much on this topic. The other thing is, with regard to the other way of obtaining a shared array buffer: I am concerned about conforming JavaScript implementations that don’t have access to Wasm and don’t have access to concurrency. I would like a sequential JavaScript implementation to be one that conforms to the spec, and that has an effect on what we designate as normative optional. Absence from the global object does not suffice, for another reason: we have the getIntrinsic proposal in progress, and according to a precise reading of the current spec, SharedArrayBuffer could be argued to be a hidden intrinsic, which would be revealed by getIntrinsic, and that is contrary to the intention when we made the global itself optional.

SYG: We should take this off-line. Let’s take this off-line. I want to drain the queue through the rest, but I want to confirm your first question, my understanding is yes.

MM: Good. Thank you.

CDA: We are at time.

SYG: So I wanted to ask for consensus. Can I ask folks in the queue whether their items are material to consensus, and if so, can I ask for a 5-minute extension?

MM: I am happy with 5-minute extension and I do not object to consensus.

SYG: Waldemar?

WH: I wasn’t able to find any spec text for this. Is there one?

SYG: You’re absolutely right. I completely dropped the ball: I put it on the agenda without writing spec text, so I am not really eligible for Stage 2 here. I just forgot. But to answer your question about the semantics, from the spec point of view it will do nothing. It says return undefined, and there’s an implementer’s note: we expect you to yield the CPU.

WH: Yes, I like this proposal; it’s just that the documentation is a bit out of date. The documentation includes the clamping behavior, and as you said, that’s gone.

SYG: Completely right. Yeah.

WH: I will support this once you add spec text and correct the documentation.

SYG: Okay. Given that, I withdraw my request for Stage 2 consensus, since I didn’t prepare the spec text. Philip, that sounds like a naming question. Please open an issue and we can deal with that.

Conclusion

  • The request for Stage 2 consensus for Atomics.microwait was withdrawn for now, pending spec text.