feat: context handling performance & memory improvements #1336
Previously, we parsed variables and passed them to the engine as a byte slice. The engine then parsed the variables into an astjson object and started fetching data. Because data is loaded layer by layer, we had to build a "context" object of the data available at each layer, which was used to render the input templates for each fetch.
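For illustration, here is a minimal sketch of that old path. It uses `valyala/fastjson` as a stand-in for the engine's astjson package, and `renderTemplate` plus the sample payload are hypothetical, not the engine's real API; the point is the marshal-then-reparse round trip per fetch.

```go
// Sketch of the previous flow (assumed fastjson-style API, hypothetical helpers).
package main

import (
	"fmt"

	"github.com/valyala/fastjson"
)

// renderTemplate stands in for the templating engine: it receives the layer
// context as raw JSON bytes and must parse them again before it can
// substitute values into the fetch input.
func renderTemplate(inputTemplate string, layerContext []byte) (string, error) {
	v, err := fastjson.ParseBytes(layerContext) // second parse of the same data
	if err != nil {
		return "", err
	}
	id := v.GetStringBytes("user", "id")
	return fmt.Sprintf(inputTemplate, id), nil
}

func main() {
	variables := []byte(`{"user":{"id":"u-1"}}`)

	// The layer data was parsed and then marshalled back into a byte slice
	// that represented the context for this layer...
	parsed, _ := fastjson.ParseBytes(variables)
	layerContext := parsed.MarshalTo(nil) // extra marshal per fetch

	// ...and the templating engine parsed those bytes all over again.
	out, _ := renderTemplate(`{"representations":[{"id":"%s"}]}`, layerContext)
	fmt.Println(out)
}
```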
With this rewrite, variables are parsed into an astjson object immediately. As new data arrives from a fetch, we parse it into astjson format and merge it with the existing data at that layer. Instead of marshalling the merged data into a JSON byte slice before handing it to the templating engine, we pass the astjson object directly, as the templating engine now accepts the astjson format.
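A corresponding sketch of the new path, under the same assumptions (fastjson-style API as a stand-in for astjson; `mergeLayerData` and `renderTemplate` are hypothetical helpers): the layer is kept as a single parsed value, fetch results are merged into it, and the templating engine reads the value directly with no intermediate byte slice.

```go
// Sketch of the new flow (assumed fastjson-style API, hypothetical helpers).
package main

import (
	"fmt"

	"github.com/valyala/fastjson"
)

// mergeLayerData merges a fetch result into the existing layer data in place,
// so the layer always holds a single parsed value.
func mergeLayerData(layer, fetched *fastjson.Value) {
	obj, err := fetched.Object()
	if err != nil {
		return
	}
	obj.Visit(func(key []byte, v *fastjson.Value) {
		layer.Set(string(key), v)
	})
}

// renderTemplate receives the parsed layer value directly and reads fields
// from it without any intermediate JSON bytes.
func renderTemplate(inputTemplate string, layer *fastjson.Value) string {
	id := layer.GetStringBytes("user", "id")
	return fmt.Sprintf(inputTemplate, id)
}

func main() {
	// Variables are parsed into an AST value right away.
	var p1 fastjson.Parser
	layer, _ := p1.ParseBytes([]byte(`{"user":{"id":"u-1"}}`))

	// A fetch result is parsed and merged into the same layer value.
	var p2 fastjson.Parser
	fetched, _ := p2.ParseBytes([]byte(`{"account":{"tier":"pro"}}`))
	mergeLayerData(layer, fetched)

	// The templating engine consumes the value directly: no marshal, no re-parse.
	fmt.Println(renderTemplate(`{"representations":[{"id":"%s"}]}`, layer))
}
```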
As a result, we save one marshalling of the layer data per fetch, which reduces memory usage and saves CPU cycles, since the templating engine no longer has to parse JSON and can access the astjson object directly.