Effect of Hack proposal on code readability #225
Actually, it's bad practice to tweak the binary operator itself for any purpose that essentially breaks its algebraic structure, and without messing up the math, we can compose any "advantage" on top of the basic operator if we want to. Mathematically, @kiprasmel made an excellent presentation; did you read it?
Another insight
In software design, we must know: or more generally, When we know the two are equal:
The latter should be chosen for robustness, and finally,
I've just posted |
@stken2050 Thanks for pitching in. So what I got from your post is that the usage of lambda expressions on the RHS of the pipe operator would open up the possibility of accidentally referencing variables that happen to be in scope, and in the worst case developers might try to abuse that scope for implicit state management between invocations. That's indeed a good argument against the usage of lambdas there, and it is also another argument against the Hack proposal specifically, as its ability to embed arbitrary expressions makes the likelihood of such maintainability issues even larger. Hope I understood correctly :) |
Actually, yes, and I'm very sorry that I misunderstood your context. In any case, I firmly believe that if we pursue some advantage on top of the operator, we should compose it. First we provide the minimal and pure operator |
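To make the "compose the advantage on top of the operator" idea concrete, here is a minimal sketch, assuming the proposed F#-style `|>` (so `x |> f` means `f(x)`, which is not valid in today's JavaScript) and a hypothetical `tap` helper:

```js
// Hypothetical helper: runs a side effect (logging, a debugger statement, etc.)
// and passes the value through unchanged, so it can sit between pipe steps.
const tap = label => value => {
  console.log(label, value);
  return value;
};

const result = 5
  |> (x => x * 2)
  |> tap('after doubling:')  // the "advantage" is composed in, not baked into |>
  |> (x => x + 1);
// logs "after doubling: 10"; result === 11
```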
@arendjr One clarification from your blog post: the Hack proposal doesn't introduce … Otherwise, the only somewhat-amusing note is that I find all of your examples with Hack pipe an improvement over the status quo versions. The only one I'd speak to specifically is the … Beyond that, I appreciate the examples & exploration. |
Yes, good clarification!
For that example specifically, I would rather write it like this:
No pipe necessary at all, and in my opinion it’s clearer than both options given in the example. It also achieves the separation between processing and actual fetching you were aiming for. And this is getting to the heart of why I feel the Hack proposal might be actively harmful. It encourages people to “white-wash” bad code as if it’s now better because it uses the latest syntax. But not only do I not find it an improvement, we have better options available today. And people will mix and match this with nesting at will. It will not rid us of bad code, but it will open up new avenues for bad code we had not seen before. That’s why in the end I think we’re better off without than with. For the F# proposal, I don’t see so much potential for abuse, hence why I’m still sympathetic to it. But if we could prohibit lambdas from even being used with it, I would probably be in favor. It’d make it even less powerful, but I do believe this is a case where less is more.
Yeah, absolutely. I would gladly make the adjustment if the F# proposal were to be accepted, but it’s true it’s a pain point. I do think once libraries had their time to adjust, we might come out for the better, but I cannot deny it will cause short-term friction. If the Hack proposal were accepted however, things might initially appear more smooth. But once we have to deal with other people’s code that uses pipes willy-nilly, I fear we may regret it forever. |
This is a reasonable take, even if I disagree with the cost/benefit calculation.
I think this makes the use-case for F# unreasonably narrow, which would really limit the utility of the operator.
My general perspective is that forcing mainstream JS to adapt to this API will be more painful than asking functional JS to adapt to Hack. Note that if you were to adapt to the curried style you reference in the blog post, you would no longer be able to use |
My idea was to just check the type of the first argument: if it's a number, return a lambda; if it's a string, continue as before. It's JS, so why not :) But alternatively, I could just ask the FP folks to use |
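A rough sketch of that idea, with a hypothetical `take` helper standing in for whichever library function is being discussed (the name and signature are illustrative only):

```js
// If the first argument is a number, return a function awaiting the string
// (a curried, pipe-friendly form); if it's a string, behave as before.
function take(arg, count) {
  if (typeof arg === 'number') {
    return str => str.slice(0, arg);
  }
  return arg.slice(0, count);
}

take('pipeline', 4);  // 'pipe' (data-first call, as before)
take(4)('pipeline');  // 'pipe' (curried call, usable as `x |> take(4)`)
```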
I'm still surprised to see claims against the minimal/F# style saying that it will force a curried style. It won't, just as the PFA proposal is a universal solution for those who worry about the arrow-function noise in all three cases: map, then and pipe. If PFA is not ready, and we don't want minimal/F# without PFA, then let's wait instead of introducing an irreversible Hack pipe. Pipe, in any resource you'll find, is defined as a composition of functions. From UNIX to Haskell, that's what it is. I really think we should put that proposal on hold ASAP and continue discussing it. |
@SRachamim The loss of tacit programming is explicitly a complaint against Hack (see #215 and #206). If the intention is to just use lambdas for everything, then F# is at a significant disadvantage vs Hack. See #221 for concerns about PFA's advancement. |
I think it's worth trying to define what "readability" can mean across paradigms and across time. Functional Programmers worry about mathematical consistency:
You really don't have to be on one side or the other to appreciate that Hack robs the JavaScript FP community of a certain kind of transferable mathematical thinking. "Readability" to them means something very different from "avoiding temporary variables". F# affords another kind of functional composition to JavaScript. Should it have it? JS is of course a multi-paradigm language, but despite the current declarative push it's no secret that it's still very much used in an imperative way, so perhaps Hack suits it "better"? But JavaScript also has -- by pure luck of history thanks to its original author -- first-class functions. It's baby steps away from being completely FP friendly, and it has an active FP community that seems acutely aware of this. I'm really trying to avoid falling on one side or the other here, but after that throat-clearing I contend that:
In other words, Hack is the readability ceiling for imperative programming, whereas F# raises the ceiling for compositional readability for those working in FP, either now or in the future. |
Lambda is an (apparently scary) name for something that we're all comfortable with: a function. It's already everywhere: class methods are functions, … Every concern you'll have about the minimal/F# proposal can be applied anywhere else you use a function. So why don't we tackle it all with a universal PFA solution? Why don't we use … And as I said, if PFA is stuck, then that's not a good reason to introduce Hack. We should either wait, or avoid pipe at all (or introduce the minimal/F# style anyway). |
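For readers unfamiliar with it, a hedged sketch of how one draft of the partial-application (PFA) proposal's `?` placeholder would address the arrow-function noise in all three cases; this is proposal syntax (not valid today), and `parseTemp`, `sendTo`, and `clamp` are made-up functions:

```js
// map: instead of `x => parseTemp(x, 'celsius')`
readings.map(parseTemp(?, 'celsius'));

// then: instead of `user => sendTo(analytics, user)`
fetchUser(id).then(sendTo(analytics, ?));

// pipe (minimal/F# style): `clamp(0, 100, ?)` evaluates to a unary function
value |> clamp(0, 100, ?);
```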
Does this imply that Elixir isn't mathematically consistent because |
Elixir pipes allow you to take a lambda or function. Whether it pipes to the first value or the last value is an interesting stylistic question, and I get what you're referring to. In JavaScript our functions can be thought of as taking a tuple. This adds the wrinkle of: are you injecting into the first value of the tuple, or the last, since there is an isomorphism between (a * b * c) -> d and a -> b -> c -> d. Elixir is still mathematically consistent, in my opinion, with the general idea, because we have various flip functions which could, if need be, take an a -> (b * c) -> d and turn it into (b * c) -> a -> d. They're algebraically equivalent, so we don't really need to worry about it. If we get the a -> (b * c) -> d, we can just use a flip function to get (b * c) -> a -> d and vice versa.

What is a concern is whether I have to learn a new construct to reason about the new operator. The functional community will simply avoid placeholder pipes, because they're pipes that don't take functions, and we think in terms of passing functions around. You can call it superstitious or close-minded, but it is what it is. I don't think the FP community will ever use placeholder pipes because they are difficult for us to reason about in terms of edge cases.

I actually agree with @arendjr in that I would prefer to have no pipe rather than Hack pipes, and many, many functional devs I have talked to this week have agreed with that. As excited as we are about pipes, we want to avoid things which are going to make code harder to read and to reason about in terms of potential consequences. Functions and lambdas are easy to understand and read; expressions, by contrast, are very open-ended in what they can accomplish and what they mean. Expressions are very powerful, but we FP devs often explicitly trade power for the ability to think clearly. In my opinion JavaScript is already very open-ended, and some creative constraints can make things easier to use. |
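The "flip" equivalence mentioned above can be sketched in today's JavaScript; `formatPrice` is a made-up example function:

```js
// a -> (b * c) -> d : curried on the first argument, tupled on the rest
const formatPrice = currency => (amount, decimals) =>
  `${currency}${amount.toFixed(decimals)}`;

// flip: (a -> (b * c) -> d) -> ((b * c) -> a -> d)
const flip = f => (b, c) => a => f(a)(b, c);

const formatUsd = formatPrice('$');
formatUsd(9.5, 2);               // "$9.50"

const format950 = flip(formatPrice)(9.5, 2);
format950('$');                  // "$9.50"; same result, arguments rearranged
```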
Yeah, my general point is that whether the syntax of a given language specifically matches the syntax of the math is less important than whether it adheres to the values/principles of that math.
Nothing about Hack pipe prevents you from passing functions around.
The body of a lambda, specifically arrow functions in JavaScript, is an expression. Most of the open-ended things you can do with Hack pipe you can do in the body of a lambda (save |
[Citation needed] |
The problem is that in real life I have to deal with other people's code, and expressions are pretty open-ended. If you constrain expressions to exactly what a lambda does, that's great, but now I have a lambda that is not obviously a lambda, and that's confusing. Therefore, yes, in my codebase with myself alone Hack pipes are fine, but in real life I have to deal with my peers, and I will tell them "Please avoid the placeholder pipes, they are confusing and may not work as you think", because people in practice will do all kinds of stuff that is, well, confusing. My problem with placeholder pipes is explicitly that they are too open-ended. The very thing you espouse as a killer feature makes it, to me, a DOA feature.

I still, after all this discussion, don't know if anyone fully understands the scope of an expression pipe. If my coworker declares a var in an expression pipe, is that global, or local like a function? Will my coworker know? They will not, because I don't know, and neither will most people. It's an entirely new construct with new expectations that could be frankly anything. It's why I don't think this proposal works well for JavaScript, because it's actually subtly complicated and JavaScript is already quite complex. Remember, even if you can give me a nice succinct answer to this question, consider that people will forget, because it's unintuitive for expressions to have a scope. Do those placeholder expressions have a this? Can I add attributes to one or treat it like an object? There are so many ways that people will use placeholder pipes to make weird cursed code, and we both know it.

By contrast, with function pipes I can just refactor any sufficiently cursed lambda to a named function and know that the scope will be exactly the same, period. If it's a lambda, I can just say, "It's a lambda", and they get it. I don't think awaiting mid-pipe is actually a good feature, and I think it will just be hard to read, because awaits normally go at the beginning of the line. So @arendjr, if you're worried about cursed lambdas, they would be trivial to refactor into named functions. There's probably even an action in VS Code and your favorite editor to do that. Whether such a thing is even strictly possible with expressions is frankly unanswerable, especially when you consider the scope of all the things which may yet be added to the language. |
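A small sketch of the refactor described above, using the F#-style pipe from the proposal (not valid in today's JavaScript); `order` and `withTotal` are hypothetical names:

```js
// Before: an inline lambda in the pipeline
const summary = order
  |> (o => ({ ...o, total: o.items.reduce((sum, item) => sum + item.price, 0) }));

// After: the same lambda extracted to a named function.
// Because it is an ordinary function, its scoping rules are unchanged.
const withTotal = o => ({
  ...o,
  total: o.items.reduce((sum, item) => sum + item.price, 0),
});
const summary2 = order |> withTotal;
```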
All of the built-in types, all of the Web APIs: none of them are designed around unary functions. Even tools as basic as …
It would be a SyntaxError because |
@mAAdhaTTah You are worried about unary functions in a pipe, but why aren't you worried about unary functions in every other method like … If you worry about unary functions, why not push the PFA proposal, which will allow you to use … This claim is not a reason to push the Hack proposal. |
I already referred you to #221 to discuss the issues PFA had advancing. |
@mAAdhaTTah New syntax must not imply inventing new, unexpected ways of reasoning. Great syntactic sugar is sugar that leverages existing, familiar constructs. You can describe the minimal proposal as … And again, some variation of PFA is the solution to your concern about curried/unary functions. Don't let it leak into other proposals. Where the PFA proposal stands should not affect the nature of a pipe. A pipe should be simple function composition, whatever "function" means in JavaScript. If you feel functions in JavaScript need more syntactic sugar, that's a different proposal! |
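As a sketch of "leveraging existing familiar constructs", the minimal/F#-style pipe can be described entirely in terms of plain function application (proposal syntax in the comments):

```js
// The minimal/F#-style pipe is plain function application:
//   a |> f          is   f(a)
//   a |> f |> g     is   g(f(a))
// so a pipeline like
//   'intro.txt' |> readFile |> parse
// means nothing more than
//   parse(readFile('intro.txt'))
// No new scoping, topic token, or evaluation rules beyond ordinary calls.
```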
You’re still making the assumption that it is desirable to call anything and everything from pipes, where I think I made it abundantly clear that is not a desire in the first place. What you’re suggesting is a new syntax for any call expression in the language, without asking the question of whether we should want that. Adding two arbitrarily interchangeable syntaxes for the same thing is not a benefit to me; it just complicates things. It complicates the language for newcomers, it adds endless discussion about which style should be preferred when and where, it promotes hard-to-read code for which better alternatives exist today, and ultimately it has very little to show for it. Someone on Reddit replied to me:
And yet that seems to be exactly what you’re encouraging. So I did ask this question of myself, and I think the answer is: no, we should not want this. And yet I keep reading statements from champions making sweeping generalizations that this is “beneficial to the broader language” or “beneficial to everyone”, and I don’t feel represented by this. And frankly, I suspect there might be a large, underrepresented, silent majority that will not feel their concerns were represented if this proposal is accepted. |
I am confident there are a lot of unexplored edge cases of expressions which my coworkers or teammates in projects will use, and abuse. For example: okay, so these are expressions, not statements, I get it now, but what is

```js
var x = 10 |> this.whatever = 'yeah'
// x is 'yeah'
```

and yeah, you can easily forget to put in the placeholder entirely, and IMO it is not at all obvious.

```js
var y = {lets: 'go' } |> ^.lets = 'hooray'
// y is 'hooray' and is no longer an object
```

This is extremely unintuitive, and will definitely blend the mind of a newbie into fresh paste. By contrast, in a function or lambda it's all about returning stuff; there's a clearly communicated path, and you can even use braces and a return statement to be explicit (and I often do). In the first example, I don't even know what I'd expect, but probably either 10 to be passed through unaffected, or undefined. You better believe my coworkers who don't understand pipes will put every valid expression in there. At least I kinda understand functions; they are simpler, while expressions can be so many things. There is pre-existing guidance on how/when to use functions; I have no such guidance for placeholders. |
From the readme
So it would be pretty difficult to have this mistake go unnoticed. Any sort of linter would point it out and trying to run the code would just crash with a syntax error.
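If I read the proposal README correctly, that early error looks roughly like this (proposal syntax; `validate` is a made-up function):

```js
value |> validate();   // early SyntaxError: the topic token is never used
value |> validate(^);  // OK: the topic appears in the pipe body
```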
I don't see how the F# version is any different:

```js
var y = {lets: 'go' } |> _ => _.lets = 'hooray'
```

I'm also wondering where y'all find these crazy coworkers who bend over backwards to write the most unintelligible code possible and can't be leveled with. |
The difference is that people expect a lambda to return a value. What value are you intending to return there? By contrast, an expression is a lot of things. The root of it is, you're treating two very different things as if they're equivalent. I guarantee you that when this gets released, we will see ^.lets = 'hooray' on Stack Overflow with placeholders, and we won't with lambdas, because lambdas have been in the language for several years now, and function expressions for several years before that. People have a general intuition of what is expected with function expressions and lambdas; that intuition will have to be learned anew for placeholder expressions. Specifically, they're your bootcamp grads, fresh-out-of-college kids, mom-and-pop shops, and sometimes your boss who mostly writes Java/C# but thinks JavaScript is easy. You can explain things to them, but then someone new will come with the same misconception, or perhaps the same person who forgot, so I'd really appreciate it if we make sure to value making things intuitive the first go-around. If you think I'm being absurd, okay, but it's going to happen, and it would be a lot less likely to happen if we leverage an existing construct that is more generally understood. |
Right, it's not like when they added OOP syntax sugar to JS they expected everyone to wrap all their existing code in Classes or everyone to use Classes for all new code. It's an option if you want to write that way. They didn't pull back from OOP because devs would have to write code differently to interface with it. Same goes for adding FP syntax sugar, why do we step back from true FP principles like function composition and currying because someone might have to call a curried function or wrap their code? If you don't want to use the features -- you don't have to! But don't kneecap the language with Hack because to use it you have to write new/different code! This argument doesn't make much sense. |
I'm not. I am arguing that the universe of things you can put in a Hack pipe is far greater than the universe of things you can put in an F# pipe. I would, by extension, argue that the universe of things that benefit from the Hack pipe is far greater than that of the F# pipe.
I'm not sure what you mean by this, especially insofar as
Amusingly enough, this is how I feel every time I pull up a codebase that uses Ramda! I used that library extensively for like 2 years; it's an absolute nightmare to debug because there's no reasonable place to put breakpoints, and it's impossible to explain to non-functional coders because there are too many concepts to explain. So I don't use it anymore, which is a shame, cuz there are significant benefits to the library but they're impossible to integrate without taking on a lot of cognitive overhead.

With this syntax we can put a breakpoint in between individual steps in the pipe, similar to how you can put a breakpoint after the arrow in a one-line arrow function. Including an explicit placeholder means I can hover it like a variable & get its type in VSCode. It's easier to debug when it's integrated into the language proper, and we still get many of the benefits of Ramda's pipe.

Let me provide an illustrative example, loosely modeled on an actual example of something I had to do at work. At a high level, I have some user input that I need to combine with some back-end data which I then need to ship off to another API. Using built-in fetch:

```js
// Assume this is the body of a function
// `userId` & `input` are provided as parameters
const getRes = await fetch(`${GET_DATA_URL}?user_id=${userId}`);
const body = await getRes.json();
const data = { ...body.data, ...input };
const postRes = await fetch(`${POST_DATA_URL}`, {
  method: "POST",
  body: JSON.stringify({ userId, data })
});
const result = await postRes.json();
return result.data;
```

This sort of data fetching, manipulation, & sending (all async) is a very common problem to solve. Any of the …

Sure, maybe in the real world, I'd use an `api` wrapper:

```js
const fetchedData = await api.getData(userId);
const mergedData = { ...fetchedData, ...input };
const sentData = await api.sendData(userId, mergedData);
return sentData;
```

Compare all of that to Hack:

```js
// The `fetch` example:
return userId
  |> await fetch(`${GET_DATA_URL}?user_id=${^}`)
  |> await ^.json()
  |> { ...^.data, ...input }
  |> await fetch(`${POST_DATA_URL}`, {
       method: "POST",
       body: JSON.stringify({ userId, data: ^ })
     })
  |> await ^.json()
  |> ^.data;

// Or the axios example:
return userId
  |> await api.getData(^)
  |> { ...^, ...input }
  |> await api.sendData(userId, ^);
```

This makes it significantly clearer to see, step by step, what's going on. Admittedly, I could combine steps (maybe the data merging can happen inline with the API call; it would save me from writing …). There are even small cases where I've got a small function call with maybe an expression in it, and then I need to call one other function after it to post-process. I would love to be able to just do … |
I actually agree with you here: it is. And it's the way I like it :)
Sure, but this is JavaScript. We don't really do ADTs here. I don't consider myself part of the FP crowd, and I never advocated for tacit programming. I'm slightly sympathetic towards the F# proposal because I see a narrow use case for it, but you're right: it is a narrow use case, and given the downsides I'm skeptical we should add it. But Hack doesn't alter my position that I only see a narrow use case. The fact that it suddenly allows usage everywhere has never convinced me that we should use it everywhere. If it's about ADTs, I much prefer the way Rust handles them: not by going overboard to the functional side, but with a very plain "conventional" method-like notation. Incidentally, Rust as a language is totally fine without pipes. And so am I :) |
As mentioned ad nauseam, those programming languages have features that make writing & debugging a declarative
Sure, if you want to maintain that narrow view, you're welcome to, just as you're welcome to ban the Hack pipe from your codebase because you don't believe in pipelines. But I think this is a narrow-minded view of the language.
We do, actually, use ADTs in JavaScript, but they're not widespread because there's a bunch of features like point-free pipelining that make them difficult to use without going whole cloth into functional programming. What I expect (and as I explained in my previously linked comment) is for ADTs to become more broadly useful outside of tacit programming with a Hack pipe, because I'm not an advocate for tacit programming either. |
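As a hedged illustration of ADTs becoming usable outside tacit style, here is a tiny userland Result-like type combined with the proposed Hack pipe (the `Ok`/`Err` constructors are hypothetical, and `^` is the proposal's topic token, so this is not valid in today's JavaScript):

```js
const Ok  = value => ({ ok: true,  value, map: f => Ok(f(value)) });
const Err = error => ({ ok: false, error, map: _ => Err(error) });

const doubled = '21'
  |> Number(^)
  |> (Number.isNaN(^) ? Err('not a number') : Ok(^))
  |> ^.map(n => n * 2);
// doubled is Ok(42); no point-free composition or curried helpers required
```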
JavaScript is not auto-curried, but that's not a barrier. The Keep it simple (KISS): |
@mAAdhaTTah I know you were talking about breakpoints specifically with point-free use of Promise If we got that variant of the pipeline, I'd fully expect that we'd at some point be able to step through the pipeline steps and hopefully "step into" the function that is created in RHS when the LHS is passed in there. |
Apologies for giving off a bad tone. My argument stands, tho: huge bodies of common practice suggest that naming individual steps is rarely desired or used by people using today's variations on code-linearization (method-chaining and pipe()); when it is, it's generally done by cutting the chain at a point where the value is semantically meaningful to the program, assigning to a temp variable, and then working with that variable in a following statement. For method-chaining, this is the only realistic way to name intermediate values, and it seems to be fine. In theory So to restate my argument: either all of this is bad code, and we should encourage people to name intermediate values a lot more with the pipeline operator (which requires some stronger justification), or all of that code is fine, and the current proposal's lack of topic naming will be similarly fine. I don't think there's a reasonable path between these, where the existing code is fine with minimal intermediate variable naming but the pipe operator code needs to encourage/require more intermediate variable naming.
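The "cut the chain where the value is semantically meaningful" practice described above might look like this in today's JavaScript (`users` and `buildReport` are hypothetical):

```js
// Name the value only where the name carries meaning,
// then keep working with it in a following statement.
const activeUserIds = users
  .filter(user => user.active)
  .map(user => user.id);

const report = buildReport(activeUserIds);
```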
Expressions are not meaningfully more complicated than functions. In particular, you can put arbitrary expressions in a function call, as arguments; all that tacit style demands is that the top-level operation that actually uses the topic value be presented as a function. And with arrow funcs, that's not even a meaningful restriction. Again, your point could be made if it was clear from practice that people using …

The overall issue here is that your argument, carried from the OP thru the rest of the thread, is presented as if it's against pipeline, but it's actually a fully generic argument against any code-linearization construct that doesn't name the topic variable. Going back to the Deno example that keeps getting tossed around:

```js
const file = await Deno.open('intro.txt');
const data = await Deno.readAll(file);
const text = new TextDecoder('utf-8').decode(data);
console.log(text);
```

If these APIs weren't async, they could and almost certainly would be written as a method chain, in practice:

```js
const text = Deno.open('intro.text')
  .readAll()
  .decode(new TextDecoder('utf-8'));
console.log(text);
```

The exact API shape could vary, of course, but a file exposing a "readAll()" method and a buffer exposing a "decode()" method are both the sorts of very common API design choices one wouldn't bat an eye at normally, and it would lead to this perfectly reasonable and readable code. If JS had made a Rust-style syntax choice for its await operator, these could even stay async without breaking the method chaining:

```js
const text = Deno.open('intro.text').await
  .readAll().await
  .decode(new TextDecoder('utf-8'));
console.log(text);
```

Instead, due to some design choices that JS and the Web Platform have made, we have prefix-`await`. So, if you want to argue that the same code written with the pipe operator:

```js
const text = await Deno.open('intro.text')
  |> await Deno.readAll(^)
  |> new TextDecoder('utf-8').decode(^);
console.log(text);
```

is bad, you have to do some pretty significant work arguing why it's different from the above hypothetical method-chaining example that we could (and do, in similar libraries) write today. Or bite the bullet and argue that they're both bad (bad enough that we should be encouraging a different coding style via language design, in spite of this pattern's heavy existing usage). Either of these arguments is possible to make, but I believe they're very difficult to argue convincingly. What I'm pushing back against is treating pipelined code as a fundamentally new concept to be wary of. |
Just to throw it out there: if debugging is this big of an issue, why hasn’t there been a proposal for, say, `.map(Function.aside(() => { debugger; }))`, or in the above examples:

```js
await Promise.resolve('intro.txt')
  .then(Deno.open)
  .then(Deno.readAll)
  .then(Function.aside((x) => { console.log(x); debugger; }))
  .then(data => new TextDecoder('utf-8').decode(data))
  .then(console.log)
```

Which is roughly:

```js
Function.aside = (fn) => (x) => {
  fn(x);
  return x;
};
```

Prior art includes Rust's … |
@tabatkins to me, it is different. I also agree my argument is probably not really strong, but I think it's still a valid concern. My position on Deno's example is that the original code should be preserved, without trying to shoehorn a pipe in there. As I've said previously, giving each intermediate step a name gives me information so that I don't need to "internally parse" what the expression on the right-hand side does. I see "file", "data" and "text" and I know what's happening without even having to think about promises. The chainable API you proposed also has this advantage:

```js
const text = Deno
  .open('intro.text')
  .readAll()
  .decode(new TextDecoder('utf-8'));
console.log(text);
```

Just by reading "open", "readAll" and "decode", which are neatly aligned, I can already imagine what's going on. Adding in Rust's await does add a bit of verbosity, but it's ignorable. Pipes, however:

```js
'intro.txt'
  |> await Deno.open(^)
  |> await Deno.readAll(^)
  |> new TextDecoder('utf-8').decode(^)
  |> console.log(^)
```

On every step there's much more going on when I try to read it. I need to read through the whole line, find the …

Granted, this argument is weak for this case, and probably with some time working with Hack pipes it would become easier for me to read by getting used to it, but I can imagine other examples where this issue becomes more and more relevant (places where the placeholder is used twice, expressions with lambdas with the placeholder in them, or nested pipes). Hack pipes allow and sometimes promote these kinds of usages, which I think are detrimental. To me, pipes should instead be used in places where you'd want function composition, and/or for libraries which enhance something on every step (e.g. RxJS, where you pipe an Observable through operators and each one of them transforms it into another Observable). |
@runarberg That seems reasonable and would be worth opening an issue in that repo to discuss further. |
I think @voliva already made a good representation of my argument here. |
I'm not going to argue that this wouldn't be a nice feature, but I would not call log/debugger-statements "jumping through hoops". It's a completely unremarkable practice to human-binary-search your way around problem code with breadcrumbs. Obviously, any talk about debugging comes with a huge dose of YMMV, but since it's usually clear where bugs start and end, doing an HBS over the problem area is what I normally see done. I believe this is because we're broadly (YMMV!) less concerned with the wrongness of a value at a specific point as to how it got to be wrong in the first place.
That happens, but committing breadcrumbs is a small tax against the fixing of the bug in the first place, and is not something that a pipeline operator of any flavour is going to save us from. Pre-commit hooks, eslint-plugins, and logging utilities that only produce output in prod, either via a flag, or by being behind a conditional that can be tree-shaken away during transpilation. These are not practices that are going to be made redundant by a pipeline operator.
But you're really not prevented from adding a breakpoint in the devtools, are you? You're just prevented from placing it specifically. Which can be useful. But it's also useful to place a breakpoint before the problem area to watch the behaviour of the code unfold by stepping through it.
But a nice side-effect of point-free is that those functions can sit in their own files and have unit-tests set against them. Again, big YMMV, but going into a unit-test to confirm the behaviour of a function can expedite solving a problem, and such tests can be run independently of your browser session. Sometimes not, of course. We want to check the behaviour at the point-of-use because the unique integration is where the problem is. But again, HBS'ing over such things with breadcrumbs is pretty BAU stuff and very transferable. Again, I'm not saying it isn't handy to pick a specific breakpoint with Hack. But we have decades of shared experience already with debugging functions, which is why it's easy for all of us to come up with examples of how to do it. (I'm also more than a little curious as to why some of our devtools aren't completely up to the task of setting breakpoints in a 6yo ES6 language feature, or just chaining generally, although we're off-topic enough as it is.) |
To me the debugging argument seems more about a deficiency of the debugger than the syntax. With arrow functions, a newline after …

```js
if (!good(value)) return "bad";
// how to break on the return to inspect the value?
```
|
@lightmare Inline breakpoints are a thing, so that's not really the issue. The problem is that when you have |
I see. But that issue does not apply to pipeline operator in any form (because it doesn't "build" the whole chain in advance like |
Yes, this whole breakpoint discussion started with the suggestion that we don't need |
Just to put another perspective out there, and to emphasize how simple a minimal … Worrying about where to put a breakpoint in `a |> f |> g |> h` is like worrying about where to put a breakpoint in `h(g(f(a)))`. With the exception that … |
Well, that is absolutely something that I do worry about today, and one of the reasons why I'd almost always rather split that into

```js
const effed = f(a)
const geed = g(effed)
const hd = h(geed)
```

even if those variable names add no value. That said, I don't think there would be any problem debugging synchronous minimal pipelines either. |
My point is that any concern about debugging ergonomics on a … We should do the same with any other concern, like the ergonomics of |
@SRachamim "minimal" F#-style (just plain function application, no special cases for await/yield) has been rejected twice over already; several committee members said they would give blocking objections if the proposal didn't handle I'm going to ask you to stop suggesting it; talking about it in this repository is an unproductive waste of time. Further mentions of it will be marked as off-topic (and continuing to file off-topic comments can eventually lead to CoC enforcement, tho I don't anticipate that being an issue here). |
Heya, this is a friendly generalized reminder to everyone to please follow the TC39 Code of Conduct. Please be respectful, professional, friendly, and patient with each other, as per the Code. We all should try to assume that everyone else is acting in good faith, and we should be open to learning from each other. Let’s keep it polite and civil. ^_^ Thank you, everyone! |
I want to start by thanking you for your initial investigation, the overall conversation, & engaging and developing our ideas on this. Having discussed the Hack pipe a lot over the past few weeks, it can be incredibly frustrating to engage in these conversations and not be heard, so I appreciate the good faith back and forth. I want to start with the nice things cuz you're not going to like my conclusion: if that's the best argument against the readability of Hack pipe, I'm pretty comfortable with the idea that Hack pipe is a significant net readability win. Having toyed with these examples, the thing I keep coming back to is this line:
[emphasis mine] It is contradictory that variable names like …

Using these two code snippets:

```js
const text = Deno.open("intro.text")
  .readAll()
  .decode(new TextDecoder("utf-8"));
console.log(text);
```

```js
'intro.txt'
  |> await Deno.open(^)
  |> await Deno.readAll(^)
  |> new TextDecoder('utf-8').decode(^)
  |> console.log(^)
```

Setting aside the API changes required to make the first snippet work (both the method chaining & removal of async), these two snippets communicate the same amount of information. If the latter were to remove the …

Comparing that to the base temp variable version, I still think this is a clear readability win because it explicitly communicates its fundamental nature as a pipeline. Lastly, this evolution of the pipeline with your new requirement:

```js
const prefixIfModified = file => Deno.mtime(file) > recentlyModified ? 'UPDATED: ' : '';
const decodeFile = async file => await Deno.readAll(file) |> new TextDecoder('utf-8').decode(^);

'intro.txt'
  |> await Deno.open(^)
  |> `${prefixIfModified(^)}${await decodeFile(^)}`
  |> console.log(^)
```

This is a huge win over the temp variable version with current APIs. If we started with that version, then stuck an …

By comparison, breaking down the transformation into a set of steps, extracting & implementing those steps, and composing that back up together is an incredibly powerful (and readable!) approach to solving a problem. It keeps the amount of context you need to keep in working memory down, as each step can be understood in isolation and directly communicates what exactly is happening (renaming …).

I think the major concern this doesn't address is developers over-contorting code that isn't really a pipeline into a pipeline. Having spent some time trawling through my code at work (a fairly pedestrian React/Nextjs application), I didn't find a ton of places where I could make those kinds of contortions. The places I saw potential usage of Hack pipe are either "in the small" (e.g. a 2-3 expression sequence with useless temp vars or nested expressions) or explicitly "pipeline problems" (e.g. the async sequence I mentioned above, or a data transformation preparing BE data for FE view code). That said, I accept this as a potential downside but personally think the risk is much smaller than the reward, given what I see above as clear & significant readability improvements.

In general, to expand on Tab's point, Hack pipe makes the clarity & readability of method chaining available to free functions without needing to shoehorn APIs into that style. Given the similarity to existing, readable concepts in JavaScript, I don't find the case against its readability convincing and see the improvements to the above snippets as a strong case for Hack pipe's net readability improvements. I don't necessarily expect this to be fully satisfactory to those who don't see the above as improvements, but I figured it would be worth laying out the case one last time before stepping back from this particular debate. Again, appreciate the good faith engagement on this topic! |
@mAAdhaTTah Thanks for the extensive reply! I believe your position to be balanced and well-reasoned even if I don’t agree with it. I’m still slightly worried about the schism I expect this to create between the people who will embrace Hack pipes and those who will reject them for readability, familiarity or other reasons. That said, I don’t have much else to add :) Cheers! ❤️ (Should we close this issue?) |
@arendjr Yeah, if we've fully explored this issue, you can close it. Cheers! ❤️ Note to @lozandier: if you return and still want to respond, my suggestion would be to comment on #215 (I think that one is most relevant to what we were discussing) and link back to what you're responding to or create a separate issue. |
Over the past weekend, I did some soul searching on where I stand between F# and Hack, which led me into a deep dive that anyone interested can read here: https://arendjr.nl/2021/09/js-pipe-proposal-battle-of-perspectives.html
During this journey, something unexpected happened: my pragmatism had led me to believe Hack was the more practical solution (even if it didn't quite feel right), but as I dug into the proposal's examples, I started to dislike the Hack proposal to the point where I think we're better off not having any pipe operator than having Hack's.
And I think I managed to articulate why this happened: as the proposal's motivation states, an important benefit of using pipes is that they allow you to omit naming of intermediate results. The F# proposal allows you to do this without sacrificing readability, because piping into named functions is still self-documenting as to what you're doing (this assumes you don't pipe into complex lambdas, which I don't like regardless of which proposal). The Hack proposal's "advantage", however, is that it allows arbitrary expressions on the right-hand side of the operator, which has the potential to sacrifice any readability advantages that were to be had. Indeed, I find most of the examples given for this proposal to be less readable than the status quo, not more. Objectively, the proposal adds complexity to the language, while its advantages seem subjective and questionable at best.
I'm still sympathetic towards F# because its scope is limited, but Hack's defining "advantage" is more of a liability than an asset to me. And if I have to choose between a language without any pipe operator or one with Hack, I'd rather have none.
So the main issue I would like to raise is: is there any objective evidence on the Hack proposal's impact on readability?
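For readers skimming the thread, a minimal sketch of the contrast discussed above (both snippets use proposal syntax, so neither runs in today's JavaScript; `trim`, `lowercase`, and `hyphenate` are hypothetical unary functions):

```js
// F#-style: the right-hand side must evaluate to a function; piping into
// named functions stays self-documenting.
const slug = title |> trim |> lowercase |> hyphenate;

// Hack-style: the right-hand side is an arbitrary expression using the
// topic token, which is exactly the flexibility questioned above.
const slug2 = title
  |> ^.trim()
  |> ^.toLowerCase()
  |> ^.replaceAll(' ', '-');
```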