Reduce memory overhead of tracking #3833
Conversation
🦋 Changeset detected. Latest commit: fcbcf17. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Ah sorry, it's been a while since I last contributed to OSS, I forgot that creating a PR on my fork will create the PR on the main repo. 😆 Let me get some memory profiling numbers first (in the coming days!) and update the description.
Force-pushed from 8ce69b6 to a373160.
@urugator 👋 WDYT? I'm not sure who to ask for review here, but you've always been awesomely responsive so I thought of you. 😅 I'd be keen to get this in since I'm hopeful it could help remove some GC-related jank during intensive interactions.
I am wondering about 2 things:
I will try to notify @mweststrate to provide feedback/blessing.
Great feedback, thanks @urugator!
Good question. We use
Excellent point. 👍 And I agree. I think the initial-run preallocation is the one that might make sense (because it can prevent the runtime from reallocating the array). But then again, it's unclear to me whether, in the average case, what we save by avoiding the array reallocs isn't lost to the time we spend in GC. 🤷 Let's see if @mweststrate could shed more light on what the thinking was when we added the preallocation logic (which was...8 years ago, so I hope his memory is better than mine 😆).
Sorry for the late follow-up, I've been a bit swamped lately. Did you also run the performance suite in the repo, @realyze? But I think you are right, this seems overly aggressive. I think deps are either typically pretty stable or, if not, hugely varying (e.g. a conditional filter over a large map or so). I suspect you could even merely reduce it to
Thanks for the reply, @mweststrate! I ran `yarn mobx test:performance` (I assume that's the perf suite) and it all passed.
Yeah that is correct, did you compare the numbers before/after? (I wouldn't expect something stat sig, but just in case.)
packages/mobx/src/core/derivation.ts (Outdated)

```ts
derivation.newObserving_ = new Array(
    derivation.observing_.length
        ? Math.ceil(derivation.observing_.length * 1.2) // 20% extra room for variation in deps
        : 100 // Establishing initial dependencies, preallocate constant space.
)
```
Is `derivation.observing_.length` a good indicator of "initial"? Ideally we would like to avoid preallocating anything for non-initial runs if there are no observable deps, e.g.:

```tsx
const Cmp = observer(() => <div>no-observable-deps</div>)
```

Maybe there is an opportunity to bail out even earlier in this case, dunno.
Hmmm, good question! I think we wouldn't re-run `trackDerivedFunction` in case there are no dependencies, right? (Because nothing "inside" would trigger a rerun and external calls would get the cached value.)
No, deps can still change based on something not observable, like prop/state.
The main point is that zero deps isn't a special case; we assume the same thing, namely that the number of deps doesn't change significantly between runs. Therefore we preallocate an array of the same size, in this case an array of zero length. Probably not much else we can do. But we need to somehow differentiate between the initial run (preallocate constant space) and non-initial runs (preallocate zero). Probably we can just change the condition to `derivation.runId_ !== 0`.
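A minimal, self-contained sketch of the policy being discussed (the `DerivationLike` shape and `preallocateNewObserving` helper are illustrative inventions, not MobX's actual internals): preallocate constant space only on the initial run, and otherwise size the array to the previous run's dependency count.

```typescript
// Illustrative sketch only; DerivationLike and preallocateNewObserving are
// hypothetical names, not MobX internals.
interface DerivationLike {
    runId_: number // assumed to be 0 before the first tracked run
    observing_: unknown[] // dependencies recorded by the previous run
    newObserving_: unknown[] | null // dependencies collected by the current run
}

function preallocateNewObserving(derivation: DerivationLike): void {
    derivation.newObserving_ = new Array(
        derivation.runId_ === 0
            ? 100 // initial run: no history yet, preallocate constant space
            : derivation.observing_.length // later runs: match the last run's dep count
    )
}

// A non-initial run that previously tracked 7 deps gets exactly 7 slots:
const d: DerivationLike = { runId_: 3, observing_: new Array(7), newObserving_: null }
preallocateNewObserving(d)
console.log(d.newObserving_!.length) // 7
```

With this shape, a derivation with zero deps on its previous run preallocates a zero-length array, which matches the point that zero deps isn't a special case.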
OK, here are my results based on running the proxy perf suite 10 times and recording the time as reported by the shell. Both
So at least within the context of the perf suite, this change seems to make
This looks great, thanks for the detailed measurement. LGTM!
I don't want to delay this PR any further, just a thought I want to leave here: |
I agree. On the other hand, glancing at the code, there are some non-null assertions in the codebase (
One last thing, can you add a changeset please?
Ah neat, done via fcbcf17!
Thank you!
@mweststrate / @urugator I'm not sure how the mobx npm release process works.
Sadly, it tends not to work: https://github.com/mobxjs/mobx/actions/runs/8213813479/job/22465634950#step:6:65
Ugh again. Sorry, was focused on Immer last week, but will try to take a look on Friday.
@mweststrate gentle ping please 😆 Anything I could do to help? |
OK, I think I found the culprit.
TL;DR: Preallocating a `size + 100` item array when tracking dependencies seems overly generous and can lead to more frequent garbage collections, causing jank. This PR replaces that by preallocating the same space used by dependencies in the last run.

Example

Let's consider this scenario where we read 1M computed values in an autorun. A million may seem like a lot, but if we read e.g. 2000 computed values in every frame of an interaction and the interaction takes 5 seconds, we get 60fps * 5s * 2000 reads per frame and we're at 600K already. So: not super unrealistic, actually.
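The arithmetic above can be sanity-checked directly (the variable names are just for illustration):

```typescript
// Back-of-the-envelope check of the 600K figure: 2000 computed reads per
// frame, 60 frames per second, over a 5-second interaction.
const readsPerFrame = 2000
const fps = 60
const interactionSeconds = 5
const totalReads = readsPerFrame * fps * interactionSeconds
console.log(totalReads) // 600000
```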
I've done some testing on our Canva codebase and the overhead of preallocations during some interactions (e.g. panning) is ~98% (i.e., we only use 2% of the preallocated space; the rest gets GC'd).
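To make that waste figure concrete, here is a hypothetical illustration (the 2-dep count is assumed for the sake of the example, not taken from the profiling): under the old `size + 100` policy, a derivation with a couple of stable deps allocates roughly 50× what it uses.

```typescript
// Hypothetical illustration of ~98% preallocation overhead (dep count assumed).
const previousDeps = 2 // stable dependency count across runs
const allocated = previousDeps + 100 // old policy: previous size + 100 slots
const used = previousDeps // slots actually filled during the run
const wastePercent = (1 - used / allocated) * 100
console.log(wastePercent.toFixed(1) + "%") // "98.0%"
```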
Impact

Memory profiling with the mobx production build from CDN, I get ~430MB allocated when running `__main()` (i.e., we have huge swathes of memory that'll need to be GC'd). With this change, this drops to ~36MB.

Devtools profiler also (unsurprisingly) reports shorter time spent in GC for the build with this change (37ms vs 57ms); overall, the `__main()` function runs ~30% faster with this change (compared to current main).

Of course, this is a lab scenario, so the difference in real-world cases would presumably be different. However, I think it's quite reasonable to expect that in the P99 case the number of dependencies between derivation runs doesn't increase dramatically.
Code change checklist

- Updated `/docs`. For new functionality, at least `API.md` should be updated
- Ran the performance suite (`yarn mobx test:performance`)