Improving the implementation of signals #104
Conversation
force-pushed from f09e7c3 to 6b0b8af
It seems that …
Thanks for the PR! It's awesome to see that signals spark a lot of ideas. I plan to go through the suggestions later in more detail over the weekend, if that's ok.

I'm not sure I understand the improvement, as this phrase describes the exact behaviour that is in …

Either way, looking forward to diving more deeply into the code soon! Thank you for making a PR 👍
Consider the test code currently in `packages/core/test/signal.test.tsx`, lines 211 to 228 at commit a4d659e. It uses a spy to count how many times the `computed` callback runs. With my code, it will not re-run the callback.
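For illustration, a minimal sketch of that kind of test (hypothetical: not the exact test at the referenced lines; it assumes sinon, chai, and the `@preact/signals-core` API):

```ts
import { signal, computed } from "@preact/signals-core";
import sinon from "sinon";
import { expect } from "chai";

const s = signal(0);
const spy = sinon.spy(() => s.value);
const c = computed(spy);

c.value; // the first read runs the callback
expect(spy.callCount).to.equal(1);

s.value = 1; // invalidate the computed

// Eager computeds re-run the callback right away on the write above;
// lazy computeds (as in this PR) re-run it only on the next read.
c.value;
expect(spy.callCount).to.equal(2);
```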
I am still working on fixing the Preact/React integration, because it also needs to be rewritten from scratch. But the core signals implementation should be (mostly) complete.
That's not what's happening there. The call count of the spy function isn't reset, so it stays at 1.
Went through most of the changes and I really like where this is going. I wonder if it's easier to land the changes in succession, rather than having to rewrite everything to completion before being able to merge them. I think some of the changes in this PR could be merged right now if extracted into a separate PR. I'm mainly worried that it might lead to way more merge conflicts than necessary, as there is continued development on the main branch.

I'm a bit unsure about going with `WeakRef`. My assumption is that 1)–3) are changes that don't require discussion and are straightforward to merge. With 4) I could see that becoming a discussion point, which might take a while until everyone reaches a shared agreement.

My hope is that 1)–3) also don't require any changes to the framework adapters, which should further mitigate the risk that changes take longer to land, or the risk of running into unforeseen things. What do you think?
But it was reset by … With my code, that test fails, saying that the … That's why I had to update some of the tests to explicitly use …
I think it would be hard to split this into smaller changes; so many things are intertwined in the implementation. But I'll see if it's doable.
I'm okay with removing the WeakRef support, but WeakRef is supported by 91% of browsers, including all modern browsers. If needed, we can create a WeakRef polyfill which just maintains a strong reference without disconnection:

```ts
// Fallback with the same shape as the native WeakRef API, except that it
// holds a strong reference, so the value is never actually collected.
class WeakRef<T> {
  private _value: T;

  constructor(value: T) {
    this._value = value;
  }

  public deref(): T | undefined {
    return this._value;
  }
}
```

This will cause a memory leak on older browsers, but there's nothing we can do about that. Or, alternatively, users can choose to use a WeakRef polyfill if they wish.
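For example, a hedged sketch of how such a fallback might be installed only where the native global is missing (illustrative, not part of this PR; `WeakRefFallback` is a made-up name):

```ts
// Install the strong-reference fallback only when WeakRef is absent
// (e.g. in a pre-compiled ES5 bundle).
if (typeof (globalThis as any).WeakRef === "undefined") {
  class WeakRefFallback<T> {
    private _value: T;
    constructor(value: T) {
      this._value = value;
    }
    deref(): T | undefined {
      return this._value;
    }
  }
  (globalThis as any).WeakRef = WeakRefFallback;
}
```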
Apologies, I stand corrected. Maybe it's a lesson for me not to do code reviews close to midnight 😬
Happy to lend a hand with that if you desire.
91% is a fantastic outlook, but dropping that 9%, especially older iOS browsers, would be a huge loss of revenue in the e-commerce world. It's not public, but we know that Preact is used for the store frontends of several big e-commerce companies. So everything in the Preact ecosystem has to be a little more conservative than other projects when it comes to browser support.
That's an odd decision. Many, many libraries don't work in older browsers like IE. When a project needs to support older browsers, it uses transpilation (like Babel) and polyfills. That's standard practice, and it's the choice of the project, not the library. By using a polyfill, the project is able to use modern libraries while still supporting older browsers.

It's unreasonable to expect every library to support old browsers, because polyfills exist. And because Preact uses ES6 modules, projects need to use some sort of transpilation for older browsers anyway.

If your concern is with the pre-compiled ES5 code, the WeakRef polyfill can simply be included there. That's what many libraries do: they provide clean ES6 modules without polyfills, and also pre-compiled ES5 code which has polyfills.
Preact and the related modules do not require the use of ES Modules or transpilation. We ship ES5 in all but one of the compiled variants in each npm package. There are a few reasons for this: …
I'd like to redirect discussion away from the polyfilling topic though, because we're not going to make a fundamental decision about Preact as part of a refactor PR.

Regarding the implementation (apologies, I was away last week): one thing that needs to be considered is that we're aiming to provide a general-purpose reactivity model. The signals core is framework-agnostic, but it's not intended to be a competitor to something like Rx; it's agnostic to allow for additional framework bindings to be created, similar to the current Preact one. In all of these cases, we have the advantage of knowing which signals/computeds are used by which components/elements in a tree structure with a well-defined lifecycle (VDOM or DOM). Because our state graph is attached to that existing lifecycle, we can already eagerly destruct the graph and allow the JS engine to do its garbage collection. That works today (assuming we land the bugfix in #117), and is verifiable in Chrome's memory inspector.

That said, there are lots of good things in this PR. I just wanted to provide some clarity on why we didn't use WeakRef in the first place, and why I would prefer to avoid it unless absolutely necessary.
And as I said, it's very easy to just add a WeakRef polyfill to the pre-compiled ES5 code. The amount of extra code is very tiny, and the performance is very good.
I actually found the opposite: `Set` performs better than `Array` in most situations. Perhaps iteration on `Set` is slower, but `add`/`delete`/`has` are faster, and they're the ones that are actually used.
I don't mind deferring it to another PR, but I disagree that it's a fundamental decision, because the behavior for the end user is identical: it is not a breaking change, it is a purely internal detail.

That works well enough for components, which have explicit lifetimes, but not in general, as I showed in my first post.
Do you have a link to your findings about `Set` vs `Array` performance?

I think you might have misread my point about behavior: your reply was specifically about developer-observability. In this case, garbage collection is an observable behavior, since failure to GC effectively could easily lead to scenarios where an app crashes or consumes an unacceptable amount of memory.

It's okay to disagree about whether something is fundamental. Preact isn't a new project; it has a core team of folks who have guided it for many years and who are ultimately responsible for deciding what is fundamental and what is an implementation detail. These things change over time, but that involves a lot of discussion far beyond the scope of a single PR.
Sure: https://esbench.com/bench/631d30a36c89f600a5701bdc

On my machine, using Brave, I get this result: … Even for very small collections (5 values), `Set` comes out faster.
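As a rough illustration of the kind of comparison that benchmark makes (a local sketch, not the linked esbench suite):

```ts
// Micro-benchmark sketch: add/has/delete on Set vs push/includes/splice
// on Array, for the small collection sizes discussed here (N = 5).
const N = 5;
const ITERATIONS = 100_000;

function benchSet(): number {
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    const set = new Set<number>();
    for (let j = 0; j < N; j++) set.add(j);
    for (let j = 0; j < N; j++) set.has(j);
    for (let j = 0; j < N; j++) set.delete(j);
  }
  return performance.now() - start;
}

function benchArray(): number {
  const start = performance.now();
  for (let i = 0; i < ITERATIONS; i++) {
    const arr: number[] = [];
    for (let j = 0; j < N; j++) arr.push(j);
    for (let j = 0; j < N; j++) arr.includes(j);
    for (let j = 0; j < N; j++) arr.splice(arr.indexOf(j), 1);
  }
  return performance.now() - start;
}

console.log(`Set:   ${benchSet().toFixed(1)} ms`);
console.log(`Array: ${benchArray().toFixed(1)} ms`);
```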
To be clear, the current code in … has a memory leak. Here is a simple test case showing the memory leak:

```js
// Assumed import; in the current implementation, each computed stays
// subscribed to `s` even after it goes out of scope, so none of the
// 100,000 computeds created here can ever be garbage collected.
import { signal, computed } from "@preact/signals-core";

const s = signal(5);

for (let i = 0; i < 100000; ++i) {
  {
    const c = computed(() => {
      return s.value;
    });
    c.value;
  }
}
```

But my code (with `WeakRef`) does not have this leak. If you remove … (Incidentally, the ….)
Also to be clear, I was discussing the concern about my code causing Preact signals to break in older browsers (which is fixed by adding a WeakRef polyfill to the compiled ES5 code). There isn't any concern with memory leaks, because my code is never worse at memory leaks than the current implementation.
Any chance we can get this PR going?
@MrHBS Sorry, I've been busy, but I do plan to continue working on this.
It's been a while. While I have no stake in this, it was very interesting reading through the conversation and glancing over the code. Go get it integrated :)
The extra time that has passed also means that the caniuse percentage went up; the 9% that would get the polyfill treatment is now closer to ~5%. I'd find the work interesting even if it got spun off into an API-compatible separate package, like usignal, that you can swap in using an import map.
Hi, I have many years of experience implementing FRP systems. I quite like your implementation of signals, but I saw some improvements that could be made.
This is a new implementation of signals, written from scratch. The code is much simpler than the old implementation, and also ~15% faster.
In addition, it fixes a memory leak by using WeakRef:
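For illustration, a minimal sketch of the scenario (the names `s` and `c` and the import are assumptions, chosen to match the sentence below):

```ts
import { signal, computed } from "@preact/signals-core";

const s = signal(5);

{
  const c = computed(() => s.value);
  c.value; // reading `.value` subscribes `c` to `s`
}
// `c` is now unreachable from user code, but without WeakRef the
// subscription held by `s` would keep it alive forever.
```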
With this PR, `c` will be garbage collected if it's not used anymore.

However, this does cause a couple of minor changes to the behavior:
The behavior of `computed` is lazier: now it will only update when `.value` is called; it no longer updates automatically when its dependencies change. However, `effect` continues to update automatically; it is not lazy. This improves performance, and also creates a clean separation between `computed` (which should be pure / referentially transparent) and `effect` (which can have side effects).
computed
/effect
has been changed slightly.With the old behavior, it would only error the first time that
.value
is called, and afterwards it would return the old stale value.With the new behavior, if it errors it will keep erroring every time
.value
is called, until one of its dependencies changes. When one of its dependencies changes, it will then re-run the function again.This behavior is faster, more intuitive, and more semantically correct.
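A sketch of the proposed error semantics (illustrative; assumes the `@preact/signals-core` API with this PR's behavior):

```ts
import { signal, computed } from "@preact/signals-core";

const input = signal(-1);
const root = computed(() => {
  if (input.value < 0) throw new Error("negative input");
  return Math.sqrt(input.value);
});

try { root.value; } catch (e) { console.log("throws"); }
// New behavior: every further read keeps throwing...
try { root.value; } catch (e) { console.log("still throws"); }

input.value = 9;         // ...until a dependency changes,
console.log(root.value); // which re-runs the function: logs 3
```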