Background
The Java-Parser is implemented in a style that closely resembles the official specifications.
This is done to increase confidence in its correctness and to ease maintenance and upgrades when new versions of the specs are released.
So the specification for CompilationUnit (JLS §7.3) looks like:
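```
CompilationUnit:
  OrdinaryCompilationUnit
  ModularCompilationUnit

OrdinaryCompilationUnit:
  [PackageDeclaration] {ImportDeclaration} {TypeDeclaration}

ModularCompilationUnit:
  {ImportDeclaration} ModuleDeclaration
```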
The matching implementation in the parser is very similar:
prettier-java/packages/java-parser/src/productions/packages-and-modules.js, lines 6 to 45 at d20c1e3
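For a rough idea of what that mapping looks like, here is a minimal self-contained sketch, not the actual file contents; the token definitions and rule names are simplified assumptions:

```javascript
const { createToken, Lexer, CstParser } = require("chevrotain");

// Toy tokens, just enough to express the shape of the production.
const Identifier = createToken({ name: "Identifier", pattern: /[a-zA-Z]\w*/ });
const Package = createToken({ name: "Package", pattern: /package/, longer_alt: Identifier });
const Import = createToken({ name: "Import", pattern: /import/, longer_alt: Identifier });
const Class = createToken({ name: "Class", pattern: /class/, longer_alt: Identifier });
const Semicolon = createToken({ name: "Semicolon", pattern: /;/ });
const WhiteSpace = createToken({ name: "WhiteSpace", pattern: /\s+/, group: Lexer.SKIPPED });
const allTokens = [WhiteSpace, Package, Import, Class, Identifier, Semicolon];

class CompilationUnitSketch extends CstParser {
  constructor() {
    super(allTokens);
    const $ = this;

    // Mirrors the spec almost one-to-one:
    //   OrdinaryCompilationUnit:
    //     [PackageDeclaration] {ImportDeclaration} {TypeDeclaration}
    $.RULE("ordinaryCompilationUnit", () => {
      $.OPTION(() => $.SUBRULE($.packageDeclaration));
      $.MANY(() => $.SUBRULE($.importDeclaration));
      $.MANY2(() => $.SUBRULE($.typeDeclaration));
    });

    // Deliberately simplified bodies; the real productions are far richer.
    $.RULE("packageDeclaration", () => {
      $.CONSUME(Package);
      $.CONSUME(Identifier);
      $.CONSUME(Semicolon);
    });

    $.RULE("importDeclaration", () => {
      $.CONSUME(Import);
      $.CONSUME(Identifier);
      $.CONSUME(Semicolon);
    });

    $.RULE("typeDeclaration", () => {
      $.CONSUME(Class);
      $.CONSUME(Identifier);
    });

    this.performSelfAnalysis();
  }
}
```

The point is that each spec production maps almost one-to-one onto a RULE, which is what keeps the parser easy to diff against new versions of the spec.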
The Problem and current mitigation
The parser toolkit (Chevrotain) cannot be used to implement the grammar directly, due to different parsing algorithms (LL vs LR, if I recall correctly). So instead the parser uses backtracking to figure out which alternative to pick in many places in the grammar.
Simple naïve backtracking is very inefficient, so in many rules the parser uses slightly more optimized backtracking, in which we try to limit how far forward in the grammar we look, instead of the most naïve approach, in which a backtracked rule is fully parsed twice.
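To make the difference concrete, here is a sketch using Chevrotain's documented GATE and BACKTRACK utilities; the rule and token names (statement, localVariableDeclaration, assignment, Identifier, Equals) are illustrative, not java-parser's actual productions:

```javascript
// Fragment only: this would live inside a CstParser subclass such as the
// sketch above, with tokenMatcher imported from chevrotain.
$.RULE("statement", () => {
  $.OR([
    {
      // Naive backtracking: BACKTRACK speculatively parses the entire
      // alternative once just to answer yes/no; when it succeeds, the
      // alternative is then parsed a second time "for real".
      GATE: $.BACKTRACK($.localVariableDeclaration),
      // The cheaper mitigation described above bounds the lookahead
      // instead of re-parsing. Here two tokens would suffice, because
      // "Type name ;" has an Identifier in second position while
      // "name = value ;" has an Equals:
      //   GATE: () => tokenMatcher($.LA(2), Identifier),
      ALT: () => $.SUBRULE($.localVariableDeclaration),
    },
    { ALT: () => $.SUBRULE($.assignment) },
  ]);
});
```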
Recent Discoveries
It seems like a large percentage of the parsing time is spent going back and forth due to backtracking. There seem to be roughly 23 backtracking attempts (exceptions) per line of code.
Open Questions / More info needed
How bad is the performance impact of backtracking? In other words, how many times do we re-parse the same input?
Given an input of N tokens, how many times does CONSUME get called (successfully or unsuccessfully)? 1.5N? 3N? 10N?
Do the same parsing rules get called multiple times at the same idx in the token stream?
This could be counted by tracing/logging the data for all SUBRULE invocations and grouping the results (see the sketch below), e.g. for each SUBRULE call, collect and later group:
occurrenceIdx
tokenVector idx
subrule name
rule parameters (if any)
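A minimal sketch of what such instrumentation could look like; the helper names are hypothetical, and the key uses the next token's startOffset as the position, since Chevrotain exposes the lookahead token via LA(1):

```javascript
// Hypothetical tracing helper, not part of java-parser or Chevrotain.
const invocationCounts = new Map();

// Call at the start of each traced rule, passing the parser instance.
function traceSubrule(parser, ruleName, occurrenceIdx) {
  // Key each invocation by rule name + occurrence + position of the next
  // token, so repeated parses of the same input region group together.
  const tokenOffset = parser.LA(1).startOffset;
  const key = `${ruleName}:${occurrenceIdx}:${tokenOffset}`;
  invocationCounts.set(key, (invocationCounts.get(key) ?? 0) + 1);
}

// After parsing, any key with a count > 1 means the same rule was
// re-parsed at the same position, i.e. wasted backtracking work.
function reportReparses() {
  for (const [key, count] of invocationCounts) {
    if (count > 1) console.log(`${key} parsed ${count} times`);
  }
}
```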
More possible optimizations
Once we have some answers to the questions above, we can consider options for more optimizations.
In the extreme case, if we end up attempting to parse the same subsets of the input in (mostly) identical ways many times, it may be cheaper to cache the intermediate results and re-use them instead of re-parsing, basically a kind of memoization.
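As a sketch of the idea only (Chevrotain does not expose such a hook out of the box, so the wiring below is hypothetical), memoization could key cached sub-parse results by rule name and start position, packrat-style:

```javascript
// Illustrative packrat-style memoization, not java-parser code.
const memo = new Map();

// parseFn is assumed to return { cst, endIdx } and to be side-effect free,
// so a cached result can stand in for a full re-parse.
function memoizedParse(ruleName, startIdx, parseFn) {
  const key = `${ruleName}:${startIdx}`;
  if (memo.has(key)) {
    return memo.get(key); // re-use the result from the first parse
  }
  const result = parseFn();
  memo.set(key, result);
  return result;
}
```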
It is also possible that the performance problem is in a specific subset of the grammar (the expressions grammar is often a performance hog), so optimizations may not need to be targeted at the whole grammar.