Add cache for more efficient refs #399
mvanaken added a commit that referenced this issue on Oct 24, 2023:
The limit specifies the size of the result list, so we can stop the filter check once the result reaches the limit.
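The early-exit idea can be sketched as follows (a hypothetical helper, not metal's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch: stop scanning as soon as the result list reaches the limit,
// instead of filtering the entire input and truncating afterwards.
public final class LimitedFilter {

    public static <T> List<T> filterUpTo(final List<T> input, final Predicate<T> predicate, final int limit) {
        final List<T> result = new ArrayList<>();
        for (final T item : input) {
            if (result.size() >= limit) {
                break; // early exit: the result list is already full
            }
            if (predicate.test(item)) {
                result.add(item);
            }
        }
        return result;
    }

    public static void main(final String[] args) {
        // Only the first two even numbers are collected; 6 and 8 are never inspected.
        System.out.println(filterUpTo(List.of(1, 2, 3, 4, 5, 6, 7, 8), n -> n % 2 == 0, 2)); // prints [2, 4]
    }
}
```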
mvanaken added a commit that referenced this issue on Oct 24, 2023:
…g is turned off. This is needed e.g. for the scope value expression, where the ParseState order is temporarily limited. Caching is then not useful, since the cache covers the whole graph. When `withOrder` is called, we need to turn the caching off.
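The design described above can be illustrated with a minimal sketch (class and method names are hypothetical, not metal's real API): a cache that can be switched off while the order is temporarily limited, so stale whole-graph results are never served.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a lookup cache with an on/off switch. While disabled
// (e.g. during a temporarily limited order), reads miss and writes are dropped,
// because cached results were computed against the whole graph.
public final class ToggleableCache {
    private final Map<String, Long> cache = new HashMap<>();
    private boolean enabled = true;

    public ToggleableCache withCachingOff() {
        enabled = false;
        return this;
    }

    public Long get(final String key) {
        return enabled ? cache.get(key) : null; // disabled cache always misses
    }

    public void put(final String key, final long value) {
        if (enabled) {
            cache.put(key, value);
        }
    }

    public static void main(final String[] args) {
        final ToggleableCache cache = new ToggleableCache();
        cache.put("sectorSize", 512);
        System.out.println(cache.get("sectorSize")); // prints 512
        cache.withCachingOff();
        System.out.println(cache.get("sectorSize")); // prints null
    }
}
```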
mvanaken added a commit that referenced this issue on Oct 24, 2023:
mvanaken added a commit that referenced this issue on Oct 24, 2023:
The cache for ties should match the parse graph in use. We should not use the cache of the original ParseState (returnParseState), but the cache of the newest ParseState (within the environment).
mvanaken added a commit that referenced this issue on Oct 24, 2023.
rdvdijk added a commit that referenced this issue on Oct 26, 2023.
mvanaken added a commit that referenced this issue on Nov 2, 2023:
To test cache behaviour specifically, without parsing, and to increase test coverage.
mvanaken added a commit that referenced this issue on Nov 2, 2023.
mvanaken added a commit that referenced this issue on Nov 2, 2023:
Also converted limitTest to a ParameterizedTest, since PMD otherwise complains that the method does not contain any asserts.
mvanaken added a commit that referenced this issue on Nov 3, 2023.
mvanaken added a commit that referenced this issue on Nov 24, 2023:
…l as head.
Co-authored-by: jvdb <jeroen@infix.ai>
Using `ref` is sometimes extremely expensive when referencing values parsed very "early" in large `ParseGraph`s. This is a common pitfall in many tokens. We can improve this by creating a cache of all parsed values, which is consulted when evaluating a `ref`.

An example: the CFB format contains a 'sector size' in its header, which is used throughout the token. The lookup of this sector size becomes extremely slow in CFB files with large sector chains.
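The proposed optimisation can be sketched roughly as follows (all names are hypothetical, not metal's real API): record each value under its name as it is parsed, so that evaluating a `ref` becomes a map lookup instead of a traversal of the whole parse graph.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a parsed-value cache: each value is registered under
// its name when parsed, so a later `ref` lookup (e.g. the CFB sector size read
// once in the header) is a map access rather than a full-graph walk.
public final class ValueCache {
    private final Map<String, List<Long>> valuesByName = new HashMap<>();

    // Called whenever a value is parsed.
    public void add(final String name, final long value) {
        valuesByName.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
    }

    // All values parsed under the given name, oldest first; empty if none.
    public List<Long> ref(final String name) {
        return valuesByName.getOrDefault(name, List.of());
    }

    public static void main(final String[] args) {
        final ValueCache cache = new ValueCache();
        cache.add("sectorSize", 512);
        cache.add("entry", 1);
        cache.add("entry", 2);
        System.out.println(cache.ref("sectorSize")); // prints [512]
        System.out.println(cache.ref("entry"));      // prints [1, 2]
    }
}
```

The trade-off noted in the commit messages above still applies: such a cache describes the whole graph, so it must be invalidated or disabled whenever the visible order is temporarily restricted.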