Improve TOKENIZER by 23% #668

Merged
merged 1 commit into ruby-i18n:master on Jun 21, 2023
Conversation

kbrock (Contributor) commented Jun 13, 2023

From what I can see, this is done in linear time: roughly 4*O(n) (about four regex-engine steps per input character).
This tokenizer change converts that to something a little quicker: 3*O(n).

It seems that not using a capture group, and using something other than split, would be the big win (a sketch of that idea follows below). Other than that, the available changes were meager.
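To illustrate that idea only (this is not what the PR does, and the names are hypothetical, not the gem's code): without split and without a capture group, interpolation could be done in a single gsub pass.

```
# Hypothetical sketch -- not the gem's code. One gsub pass, no capture group, no split.
TOKEN = /%%?\{[^}]+\}/

def interpolate_sketch(string, values)
  string.gsub(TOKEN) do |token|
    if token.start_with?("%%")     # escaped form: "%%{name}" renders as "%{name}"
      token[1..]
    else
      key = token[2..-2].to_sym    # "%{name}" -> :name
      values.fetch(key) { token }  # the real gem raises on missing keys; kept simple here
    end
  end
end

interpolate_sketch("Hi %{name}, escape %%{name}", { name: "World" })
# => "Hi World, escape %{name}"
```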

I used https://regex101.com/ (with the PCRE2 flavor) to evaluate the cost of the TOKENIZER, and verified with cruby 3.0.6 (by eyeball; nothing too extensive).

I tried a few changes to the regular expression against the example in the issue, and was able to speed things up by 23% with minimal changes to the codebase (a sketch of the change follows the step counts below). Other savings are to be had, but I'd like feedback before going that route.

```
/(%%\{[^\}]+\}|%\{[^\}]+\})/ =~ ('%{{'*9999)+'}'

/(%%\{[^\}]+\}|%\{[^\}]+\})/ ==> 129,990 steps
/(%?%\{[^\}]+\})/            ==> 129,990 steps
/(%%?\{[^\}]+\})/            ==>  99,992 steps (simple savings of 25%) <===
/(%%?\{[^%}{]+\})/           ==>  89,993 steps (limiting variable contents has minimal gains)
```
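For context, a sketch of what the winning swap amounts to. The TOKENIZER constant name comes from the PR title; where it lives in the gem and the surrounding split-based interpolation code are assumed unchanged.

```
# Before: alternation of the escaped and plain forms (two branches to try).
OLD_TOKENIZER = /(%%\{[^\}]+\}|%\{[^\}]+\})/

# After: one branch with an optional second "%" (~25% fewer engine steps above).
NEW_TOKENIZER = /(%%?\{[^\}]+\})/

# The capture group is kept so String#split still returns the tokens themselves.
s = "Hello %{name}, 100%%{sure}"
s.split(OLD_TOKENIZER)  # => ["Hello ", "%{name}", ", 100", "%%{sure}"]
s.split(NEW_TOKENIZER)  # => ["Hello ", "%{name}", ", 100", "%%{sure}"]
```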

There really isn't much room for improvement overall. The null/simple cases seem to speak for themselves:

```
/x/ =~ ('%{{'*9999)+'}'

/x/                          ==>  29,998 steps
/(x)/                        ==>  59,996 steps
/%{x/                        ==>  49,998 steps
/(%%?{x)/                    ==>  89,993 steps
```

And comparing against a plain string of the same length, the specially crafted string doesn't fare too much worse. This suggests that if there is a vulnerability in the regular expression, it is not exposed by this example, especially since all of the counts appear to grow linearly (see the sketch after the listing below).

```
/x/ =~ 'abb'*9999+'c'

/x/                          ==>  29,999
/(%%?{x)/                    ==>  59,998
/(%%?\{[^\}]+\})/            ==>  59,998
/(%%\{[^\}]+\}|%\{[^\}]+\})/ ==>  89,997
```
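As a rough local cross-check of the linearity observation (wall-clock times via the stdlib Benchmark module rather than regex101 step counts; absolute numbers will differ, but they should scale roughly with the input size):

```
require "benchmark"

OLD = /(%%\{[^\}]+\}|%\{[^\}]+\})/
NEW = /(%%?\{[^\}]+\})/

[1, 2, 4, 8].each do |scale|
  crafted = ("%{{" * (9999 * scale)) + "}"
  plain   = ("abb" * (9999 * scale)) + "c"

  old_c = Benchmark.realtime { crafted.split(OLD) }
  new_c = Benchmark.realtime { crafted.split(NEW) }
  new_p = Benchmark.realtime { plain.split(NEW) }

  # If matching is linear, doubling `scale` should roughly double each time.
  puts format("x%-2d crafted(old)=%.4fs crafted(new)=%.4fs plain(new)=%.4fs",
              scale, old_c, new_c, new_p)
end
```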

per #667

@radar radar merged commit 0b07e58 into ruby-i18n:master Jun 21, 2023

radar (Collaborator) commented Jun 21, 2023

Thank you very much :)

@kbrock kbrock deleted the regex branch July 13, 2023 01:30