Better Big O #7

Merged
merged 9 commits into not-elm:master on May 3, 2024

Conversation

shanecelis
Collaborator

Hey Elm,

This is a performance enhancement and maybe a small behavioral fix.

I've done some work on trie-rs which has finally come to fruition. A trie is the right data structure for what we're doing. Its lookups are O(m log n), where m is the key chord count and n is the maximum number of leaves per node in the trie. That isn't bad, but we do it once per key press, and we also check every postfix buffer[i..] for i ∈ [0, m], which I realize now may actually be slightly incorrect behavior that I'll explain below. Summing an O(m log n) lookup over those m postfixes puts our actual big O at more like O(m^2 log n).

Now the silly thing is that we should be able to do everything we need for each key press in O(log n) if we just keep track of where we are in the trie; we just need to look at our leaves. And that's essentially what this patch does. I explain more about what's under the covers in the trie incremental search documentation.
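
To make that concrete, here is a minimal sketch of the incremental-search idea using trie_rs's inc_search. The method and Answer variant names are as I understand the trie_rs 0.4 API; treat this as illustrative rather than the patch's actual code.

use trie_rs::TrieBuilder;
use trie_rs::inc_search::Answer;

fn main() {
    let mut builder = TrieBuilder::new();
    builder.push("WASD");
    builder.push("AS");
    let trie = builder.build();

    // Keep one cursor alive across key presses instead of re-searching
    // every postfix of the buffer on every key.
    let mut search = trie.inc_search();
    for key in "WAS".bytes() {
        match search.query(&key) {
            Some(Answer::Match) => println!("sequence complete"),
            Some(Answer::PrefixAndMatch) => println!("match, and longer sequences are still possible"),
            Some(Answer::Prefix) => println!("still a prefix; keep going"),
            None => {
                // No sequence continues with this key: start over.
                search = trie.inc_search();
            }
        }
    }
}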

Postfix Bug

If you recognize the key sequences "W A S D" and "A S" and someone types "W A S P", you probably don't want to recognize any key sequence. Well, our current code would recognize that as a match for the "A S" sequence. With this patch, once it detects that there is no match for the sequence, it resets and starts looking for sequences beginning with "P".
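
For concreteness, here is a tiny stand-alone sketch (hypothetical code, not the crate's) of why the old per-key postfix scan reports a match for "W A S P":

fn main() {
    let buffer = ["W", "A", "S", "P"];
    let sequences: [&[&str]; 2] = [&["W", "A", "S", "D"], &["A", "S"]];
    // The old approach rescans every postfix buffer[i..] on each key press.
    for i in 0..buffer.len() {
        for seq in &sequences {
            if buffer[i..].starts_with(seq) {
                // Prints: matched ["A", "S"] at postfix 1
                println!("matched {:?} at postfix {}", seq, i);
            }
        }
    }
}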

Allocating Less Too

Previously we allocated on each key press in consume_input by collecting into a Vec. We now use an iterator, so there's no allocation there.
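
Roughly speaking, the change is in this spirit (hypothetical signatures; the real consume_input operates on key chords and the trie, not plain bytes):

// Before: collects into a Vec, allocating on every call.
fn consume_input_vec(keys: &[u8], wanted: u8) -> Vec<u8> {
    keys.iter().copied().filter(|k| *k == wanted).collect()
}

// After: hands back a lazy iterator, so no allocation per key press.
fn consume_input_iter(keys: &[u8], wanted: u8) -> impl Iterator<Item = u8> + '_ {
    keys.iter().copied().filter(move |k| *k == wanted)
}

fn main() {
    let pressed: Vec<u8> = consume_input_iter(&[1, 2, 1, 3], 1).collect();
    assert_eq!(pressed, vec![1, 1]);
}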

Since this doesn't touch the API surface at all, you can probably just bump it to 0.4.1, I would think. But that's at your discretion, of course.

Future Idea

One idea to consider for the future is making the key sequence that was pressed available somehow. Maybe something simple like this:

#[derive(Resource)]
struct LastKeySequence(Option<Vec<KeyChord>>);

But maybe that's too simple. What'd be best is if we could optionally pass the key sequence to the system we're invoking. But I don't really know how to do that.
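
If it existed, a command's system could then peek at it, something like this (purely illustrative; I'm assuming KeyChord is exported from the crate root):

use bevy::prelude::*;
use bevy_input_sequence::KeyChord; // import path assumed

#[derive(Resource, Default)]
struct LastKeySequence(Option<Vec<KeyChord>>);

// A command bound to several key sequences could check which one fired it.
fn my_command(last: Res<LastKeySequence>) {
    if let Some(chords) = &last.0 {
        info!("triggered by a {}-chord sequence", chords.len());
    }
}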

@not-elm not-elm self-requested a review May 1, 2024 13:23
@not-elm
Owner

not-elm commented May 1, 2024

I have created a simple benchmark.

I checked, and performance improved, so this fix doesn't appear to be a problem.

before(master)     :    [826.48 µs 841.84 µs 856.72 µs]
after(this branch) :    [783.06 µs 798.71 µs 819.33 µs]

Regarding the Future Idea: what would LastKeySequence be used for, for example?

@shanecelis
Collaborator Author

That's really cool benchmark code. You're a wizard with these test harnesses, @not-elm. I wouldn't have known how to start that at all.

Re: LastKeySequence. It's admittedly for bevy_minibuffer. Sometimes commands want to know which key sequence fired them, as there can be multiple bindings for one command. Maybe it'd be better to have some kind of "hook" that can run so users can build things like that for themselves. Some kind of FnMut(KeyChord, trie_rs::inc_search::Answer), maybe.
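
Something in this direction, maybe (names invented on the spot; Answer is trie_rs::inc_search::Answer, and KeyChord's import path is assumed):

use bevy::prelude::*;
use bevy_input_sequence::KeyChord; // import path assumed
use trie_rs::inc_search::Answer;

// A user-installable callback observed on every key chord step.
#[derive(Resource)]
struct KeyStepHook(Box<dyn FnMut(&KeyChord, Answer) + Send + Sync + 'static>);

fn install_hook(mut commands: Commands) {
    commands.insert_resource(KeyStepHook(Box::new(|_chord: &KeyChord, answer: Answer| {
        if matches!(answer, Answer::Match | Answer::PrefixAndMatch) {
            // e.g. stash the chord somewhere bevy_minibuffer can read it
        }
    })));
}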

@not-elm not-elm mentioned this pull request May 3, 2024
@not-elm
Owner

not-elm commented May 3, 2024

Hmm, sorry, but I honestly can't yet picture how the hook would work.
I have created a dedicated issue here. Could you give us some simple sample code there?

Also, I will merge this PR itself.

Thank you!

@not-elm not-elm merged commit 5a43f8e into not-elm:master May 3, 2024
3 checks passed