This repository has been archived by the owner on Dec 13, 2018. It is now read-only.

Optimize debounced editor events #93

Closed
hansonw opened this issue Oct 18, 2017 · 8 comments

Comments

@hansonw
Contributor

hansonw commented Oct 18, 2017

We use RxJS to listen to (and debounce) various editor events. The hottest events are those that occur on every keystroke, namely:

  • buffer.onDidChange()
  • buffer.onDidChangeText()
  • editor.onDidChangeCursorPosition() / cursor.onDidChangePosition

Unfortunately, RxJS's debounceTime implementation (which uses its AsyncScheduler by default) seems to be relatively inefficient, taking up to a millisecond for each of the above events.

Flame graph from @nathansobo of a keypress event:

[screenshot: 2017-10-17, 5:20:29 PM]

Zip of @nathansobo's timeline profile: typing.zip

I put together a small benchmark where I create an observable and compare the cost of using debounceTime versus just using lodash.debounce (hopefully it's at least somewhat realistic):

https://gist.github.com/hansonw/f7b7f4b32011ee09916dc27a45a43845
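(This is not the original gist, but a rough, self-contained sketch of what it measures: the per-call cost of the two hot-path strategies in isolation. debounceTime's scheduler effectively clears and re-creates a timer on every event, while a lodash-style debounce mostly just records a timestamp on the hot path.)

```javascript
const N = 100000;

// Strategy 1: clear and re-create a timer on every call (debounceTime-style).
let timer = null;
let t0 = Date.now();
for (let i = 0; i < N; i++) {
  if (timer != null) clearTimeout(timer);
  timer = setTimeout(() => {}, 100);
}
clearTimeout(timer);
const resetCost = Date.now() - t0;

// Strategy 2: just record a timestamp on every call (lodash-style hot path).
let lastCall = 0;
t0 = Date.now();
for (let i = 0; i < N; i++) {
  lastCall = Date.now();
}
const reuseCost = Date.now() - t0;

console.log(`clear+set: ${resetCost}ms total, timestamp only: ${reuseCost}ms total`);
```

The absolute numbers vary by machine, but the timestamp-only path should come out far cheaper per call.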

With that in mind, it may make sense to enforce that we use observableFromSubscribeFunction in combination with a more standard debounce function rather than using debounceTime throughout the Atom IDE / Nuclide codebases.

@hansonw
Contributor Author

hansonw commented Oct 18, 2017

My proposal might be debouncedObservableFromSubscribeFunction:

function debouncedObservableFromSubscribeFunction<T>(
  fn: SubscribeFunction<T>,
  debounceTime: number,
): Observable<T> {
  return observableFromSubscribeFunction(cb => {
    const debounced = debounce(cb, debounceTime);
    const disposable = fn(debounced);
    return new UniversalDisposable(disposable, debounced);
  });
}

@nathansobo

  • buffer.onDidChange()
  • buffer.onDidChangeText()
  • editor.onDidChangeCursorPosition() / cursor.onDidChangePosition

Do you need to listen to both onDidChangeText and onDidChange? The former should summarize the latter. You really shouldn't listen to onDidChange at all if you can help it.

Another question; I'm not sure about the nature of the code that is running here: do you actually need to debounce this event, or are you just worried about redundant calls in a given tick of the event loop? If so, it might be good enough to run the code on the next tick, or to set a single timer there.

Assuming you really need the full debounce behavior...

The results actually match what's shown in the profile very well, with the next() calls with debounceTime taking ~1ms each while the more standard debounce takes about 0.16ms.

I wonder if lodash.debounce is maximally fast. Probably. 0.16ms feels about right just for the overhead of clearing and setting a timeout, which is unfortunately still pretty slow.

I really think it's worth a lot of effort to squeeze out every microsecond on the typing code path. With that in mind, let me throw an idea out there that may go too far. I have not coded this or benchmarked it or anything, so take it with a heap of salt...

Since repeatedly clearing and setting timeouts seems to be expensive, I wonder if a "good enough" debounce could be implemented with a single setInterval/clearInterval pair. The first call would set the interval with some fraction of the desired debounce delay. Subsequent calls would just mutate a timestamp. When our interval function runs, we only run the debounced function if we're within a window of our desired delay. It wouldn't be perfectly precise, but it would avoid the overhead of clearing and setting new timeouts on each call.
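A sketch of that single-interval idea might look like this (the comment above explicitly hasn't been coded or benchmarked, so the names and the wait/4 polling granularity here are my own choices, not from any codebase):

```javascript
function intervalDebounce(fn, wait) {
  let interval = null;
  let lastCall = 0;
  let lastArgs = null;
  return function (...args) {
    lastCall = Date.now(); // hot path: just record the latest call
    lastArgs = args;
    if (interval == null) {
      // One interval polls at a fraction of the desired delay instead of
      // clearing and setting a fresh timeout on every call.
      interval = setInterval(() => {
        if (Date.now() - lastCall >= wait) {
          clearInterval(interval);
          interval = null;
          fn(...lastArgs);
        }
      }, Math.max(1, wait / 4));
    }
  };
}
```

As noted above, precision is only within the polling granularity of the requested delay, which matches the "good enough" caveat.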

@matthewwithanm
Contributor

I just did a quick test with this and the results are pretty comparable to the plain lodash one:

// (Assumes `Observable` from RxJS and `debounce` from lodash.)
const debounce2 = (x, delay) => {
  return Observable.create(observer => {
    let canceled = false;
    const debounced = debounce(v => {
      // Guard against the trailing debounced call firing after unsubscription.
      if (!canceled) observer.next(v);
    }, delay);
    const sub = x.subscribe(debounced);
    sub.add(() => {
      canceled = true;
    });
    return sub;
  });
};

@hansonw
Contributor Author

hansonw commented Oct 18, 2017

After converting everything to use a fastDebounce operator as @matthewwithanm described above, this looks much better:

[screenshot: 2017-10-17, 9:09:49 PM]

It's still about 0.5ms for the timer setup, but subsequent keystrokes within the debounce window are free! I don't know if I can feel the difference, but this is a really easy change to make, so we should probably do it.

This was particularly noticeable because we had one source wired up to debounce both text changes and cursor changes (for code highlights)!

@nathansobo

Great to see. It definitely looks better. ✨

Can I ask again why you're handling onDidChange in addition to onDidChangeText? You'd have much less impact on multi-cursor performance if you only used onDidChangeText, as that event is batched.

@hansonw
Contributor Author

hansonw commented Oct 18, 2017 via email

facebook-github-bot pushed a commit to facebookarchive/nuclide that referenced this issue Oct 19, 2017
…ceTime

Summary:
`Observable.debounceTime` is actually quite inefficient with its usage of `setInterval` / `clearInterval`: if you look at a profile, it will always clear and re-create an interval upon receiving a new event.

In contrast, our debounce implementation (like lodash's) re-uses a timer when possible and just resets its timestamp. When the timer fires, we'll create a new timer if necessary.

For very hot codepaths where we debounce things like editor events, every millisecond matters. When features like 'code highlight' debounce events from several streams, this can add up!

See facebookarchive/atom-ide-ui#93 for the investigation.

Reviewed By: matthewwithanm

Differential Revision: D6096145

fbshipit-source-id: 3569e2ce1b7cfc9e693962362ff80583de75e7d5
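The timer-reuse strategy this commit message describes can be sketched roughly as follows (a hedged reconstruction for illustration, not Nuclide's actual implementation):

```javascript
function timerReuseDebounce(fn, wait) {
  let timer = null;
  let lastCall = 0;
  let lastArgs = null;
  function onTimeout() {
    const elapsed = Date.now() - lastCall;
    if (elapsed < wait) {
      // Events arrived while we were waiting: schedule one new timer for the
      // remainder, rather than having cleared/re-created a timer per event.
      timer = setTimeout(onTimeout, wait - elapsed);
    } else {
      timer = null;
      fn(...lastArgs);
    }
  }
  return function (...args) {
    lastCall = Date.now(); // hot path: reset the timestamp, reuse the timer
    lastArgs = args;
    if (timer == null) {
      timer = setTimeout(onTimeout, wait);
    }
  };
}
```

The hot path only creates a timer when none is pending; clearTimeout is never called, and new timers are only created when an existing one fires early.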
facebook-github-bot pushed a commit to facebookarchive/nuclide that referenced this issue Oct 20, 2017
Summary:
Due to the inefficiencies found in `debounceTime`, I just went through and replaced all the single-argument `debounceTime` with `fastDebounce`.

There are a few call sites that also schedule on the animation frame, so I just preserved those.

See facebookarchive/atom-ide-ui#93 for the difference in flame charts for a keystroke after this change.

Reviewed By: matthewwithanm

Differential Revision: D6107843

fbshipit-source-id: 0597fbcfe260ebd15f259882401ed1d7acb75b76
hansonw added a commit that referenced this issue Oct 20, 2017
…ceTime

Summary:
`Observable.debounceTime` is actually quite inefficient with its usage of `setInterval` / `clearInterval`: if you look at a profile, it will always clear and re-create an interval upon receiving a new event.

In contrast, our debounce implementation (like lodash's) re-uses a timer when possible and just resets its timestamp. When the timer fires, we'll create a new timer if necessary.

For very hot codepaths where we debounce things like editor events, every millisecond matters. When features like 'code highlight' debounce events from several streams, this can add up!

See #93 for the investigation.

Reviewed By: matthewwithanm

Differential Revision: D6096145

fbshipit-source-id: 3569e2ce1b7cfc9e693962362ff80583de75e7d5
hansonw added a commit that referenced this issue Oct 20, 2017
Summary:
Due to the inefficiencies found in `debounceTime`, I just went through and replaced all the single-argument `debounceTime` with `fastDebounce`.

There are a few call sites that also schedule on the animation frame, so I just preserved those.

See #93 for the difference in flame charts for a keystroke after this change.

Reviewed By: matthewwithanm

Differential Revision: D6107843

fbshipit-source-id: 0597fbcfe260ebd15f259882401ed1d7acb75b76
@hansonw
Contributor Author

hansonw commented Oct 20, 2017

Fix is in v0.5.2.

@hansonw hansonw closed this as completed Oct 20, 2017
@nathansobo

💥
