
perf: lazy init hoisted vnodes #10959

Closed
wants to merge 6 commits into from

Conversation

@antfu (Member) commented May 16, 2024

In large projects with many hoisted parts bundled into a single file, end users have to pay the cost of initializing all the hoisted vnodes upfront, which might not be ideal and might lead to slow initial loading. This PR introduces a lazy utility to initialize those hoisted vnodes on demand.

Thanks to @sxzz @Doctor-wu @ShenQingchuan

Fixed #5256
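For reference, a minimal sketch of what such a lazy hoist helper could look like, assuming a simple memoizing wrapper; the actual helper in this PR (referenced as _hoistLazy in the generated code shown later in this thread) may be implemented differently:

// Rough sketch only; the name and shape are assumptions, not this PR's actual source.
export function hoistLazy(factory) {
  let cached
  // Create the hoisted value on first access, then reuse the same reference afterwards.
  return () => cached || (cached = factory())
}

Generated code would then reference the hoisted value through a call like _hoisted_1() instead of using _hoisted_1 directly.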

github-actions bot commented May 16, 2024

Size Report

Bundles

File                          Size               Gzip              Brotli
runtime-dom.global.prod.js    90.8 kB (+56 B)    34.5 kB (+24 B)   31.1 kB (+27 B)
vue.global.prod.js            148 kB (+144 B)    53.8 kB (+75 B)   48.1 kB (+82 B)

Usages

Name                   Size      Gzip      Brotli
createApp              50.8 kB   19.9 kB   18.1 kB
createSSRApp           54.1 kB   21.2 kB   19.3 kB
defineCustomElement    53.1 kB   20.6 kB   18.8 kB
overall                64.5 kB   24.9 kB   22.6 kB

Co-authored-by: sxzz <sxzz@users.noreply.github.com>
Co-authored-by: Doctor-wu <Doctor-wu@users.noreply.github.com>
Co-authored-by: ShenQingchuan <ShenQingchuan@users.noreply.github.com>
@antfu antfu marked this pull request as draft May 16, 2024 17:46
@KermanX (Contributor) commented May 16, 2024

I am wondering if it is worth it to init hoisted v-nodes lazily🥺

  • Hoisted v-nodes are now functions, and have to be called in every rendering, which introduces some new costs (see the sketch after this list).
  • Hoisted v-nodes are created lazily in this PR, but they still can't be released after they are no longer used. So the improvement of memory usage may not be very significant.
  • Generated code becomes longer.

I don't know much about compilers and optimization, and these are just my personal thoughts. Please point out if I'm wrong🫡.
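To illustrate the first and third points, here is a rough before/after sketch of the module-level output; this is an illustration only, and the exact code generated by this PR may differ:

// Illustration of compiler output; _createElementVNode and _hoistLazy would be
// imported from 'vue' by the generated module.

// Current behaviour: the hoisted vnode is created eagerly when the module is evaluated.
const _hoisted_1 = /*#__PURE__*/ _createElementVNode("div", null, "static", -1 /* HOISTED */)

// With this PR (roughly): the hoisted value becomes a function wrapped by the lazy helper,
// and the render function has to call _hoisted_1_lazy() on each render instead of
// referencing _hoisted_1 directly.
const _hoisted_1_lazy = /*#__PURE__*/ _hoistLazy(() => _createElementVNode("div", null, "static", -1 /* HOISTED */))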

@haoqunjiang haoqunjiang added the 🔨 p3-minor-bug (Priority 3: this fixes a bug, but is an edge case that only affects very specific usage), scope: compiler and version: minor labels May 17, 2024
@sxzz sxzz marked this pull request as ready for review May 17, 2024 12:15
@sxzz sxzz added the ready for review (This PR requires more reviews) label May 17, 2024
@edison1105 (Member) commented May 17, 2024

Actually, this PR will not fix #5256. I would prefer to generate the following code to avoid calling the hoisted functions on every render:

export function render(_ctx, _cache) {
  return (_openBlock(), _createElementBlock("div", null, [
    _cache[1] || (_cache[1] = _hoisted_1()),
    _createTextVNode(_toDisplayString(_ctx.foo), 1 /* TEXT */),
    _cache[2] || (_cache[2] = _hoisted_2())
  ]))
}

@antfu (Member, Author) commented May 17, 2024

On second thought, I think having a flag to opt in is probably better. Lazy hoisting might only have a benefit when many components are bundled into a single entry, whereas in a normal app structure components, especially routes, are split into dynamic chunks, which is already lazy by itself. Maybe it can be an optimization for component libraries that really need this, but I also doubt it would be useful for users' daily apps.

@skirtles-code (Contributor)

It wasn't immediately clear to me from the title and description, but this change affects all VNode-related hoisting, not just the hoisting of the VNodes themselves, e.g. hoisted prop objects and dynamic prop arrays.

So this template code:

<div :data-a="msg">

would now generate:

const _hoisted_1 = /*#__PURE__*/ _hoistLazy(() => (["data-a"]))

instead of:

const _hoisted_1 = ["data-a"]

I'm wondering whether there's any real benefit here. Is creating a function really cheaper than creating the array directly? I did a very small (not very scientific) benchmark and it seemed that the function would be much slower. That was just measuring the cost of creating the function, not even taking into account the cost of calling it.
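For anyone who wants to repeat that kind of measurement, here is a minimal micro-benchmark sketch along the same lines; this is not the benchmark referenced above, and the numbers will vary by engine:

// Compares creating a plain array vs. creating a wrapper function vs. creating and calling it.
// Note: engines may optimize away unused results, so treat the numbers as rough indications only.
function bench(label, fn, n = 1e7) {
  const start = performance.now()
  for (let i = 0; i < n; i++) fn()
  console.log(label, (performance.now() - start).toFixed(1) + ' ms')
}

bench('plain array          ', () => ['data-a'])
bench('wrapped in a closure ', () => (() => ['data-a']))
bench('wrapped and called   ', () => (() => ['data-a'])())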

Has any benchmarking been done to confirm that this change actually yields a performance benefit?

I had a look through a few popular libraries to see how much hoisting they do. Element Plus had the most hoisting of the libraries I checked. It seems to have 186 hoisted 'things', but only 3 of those are VNodes. The majority are dynamic prop lists, so just arrays of strings.

@antfu (Member, Author) commented May 18, 2024

Yeah, good points @skirtles-code, I share the concerns as well. I didn't do much benchmarking; this was just a quick idea from my discussion with @sxzz, who might have better context on this.

I think at least this PR should not be merged as-is. We either need to introduce a flag to opt in, and/or have a smart detection based on how heavy the hoisted element is (cases like markdown rendering might benefit, but I would also doubt that as VitePress already does some optimization to remove static nodes), or maybe this is not needed at all.

Marking the PR as draft for now, but let's keep discussing / evaluating.

@antfu (Member, Author) commented May 18, 2024

For a bit more context, this was brought up when we were discussing the up-front initialization cost and the GC implications that the hoisting approach introduced for hoisted objects. Without hoisting, all those objects are created as the component is used and disposed of by GC when all instances of the component become unreferenced. I suppose this shouldn't be a big issue for the majority of apps, but I think it might still be worth exploring as an optimization if the trade-offs are good enough.

That said, this PR only addresses the up-front initialization cost but not GC for hoisted objects, as @KermanX pointed out.

While typing my comments, I just came up with another idea:

Maybe instead of turning each hoisted object into a function, we could wrap the entire component, along with its hoisted objects, into a lazy function? Pseudo code:

let __component
export default defineLazyComponent(() => {
  // use cache
  if (__component) return __component

  // imaginary hook (does not exist today)
  onAllInstancesUnmounted(() => {
    // free the memory
    __component = undefined
  })

  // the rest is the normal compiled component output,
  // and we put the hoisted values in here

  const _hoisted_1 = /*#__PURE__*/ _createElementVNode('div', null, 'Hi', -1 /* HOISTED */)
  const _hoisted_2 = ['data-id']

  __component = {
    __name: 'App',
    setup(__props) {
      const msg = ref('Hello World!')

      return (_ctx, _cache) => {
        return (
          _openBlock(),
          _createElementBlock(
            _Fragment,
            null,
            [_createElementVNode('h1', null, _toDisplayString(msg.value), 1 /* TEXT */), _hoisted_1, _createElementVNode('div', { 'data-id': msg.value }, 'Hello World', 8 /* PROPS */, _hoisted_2)],
            64 /* STABLE_FRAGMENT */,
          )
        )
      }
    },
  }

  return __component
})

That said, I suppose it still depends on whether we really need this, and it should probably not be the default for everything.

@yyx990803 (Member)

Thanks a lot for bringing this up. After looking at this and #5256 in depth, I felt the best solution is to move away from hoisting vnodes altogether and instead cache them per instance. See #11067
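Roughly, per-instance caching means the render function stores the static vnode in its _cache slot on first render, along the lines of the snippet edison1105 posted above; this is a sketch only, and the actual output of #11067 may differ:

// Sketch of compiler output; helpers would be imported from 'vue' by the generated module.
export function render(_ctx, _cache) {
  return (_openBlock(), _createElementBlock("div", null, [
    // The static vnode is created on the first render of each instance and reused afterwards,
    // so it can be garbage-collected together with the instance when the instance is destroyed.
    _cache[0] || (_cache[0] = _createElementVNode("span", null, "static", -1 /* HOISTED */)),
    _createTextVNode(_toDisplayString(_ctx.msg), 1 /* TEXT */)
  ]))
}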

@yyx990803 yyx990803 closed this Jun 4, 2024
@edison1105 edison1105 deleted the perf/hoist-lazy branch October 21, 2024 07:37
@edison1105 edison1105 restored the perf/hoist-lazy branch October 21, 2024 08:17
Successfully merging this pull request may close these issues.

Memory leak in hoist static
7 participants