I want to reproduce and then adjust the data of the WMT2014 benchmark. Because of that, I can't use HELM directly (if I understand it correctly); instead, I want to use the dataset and the way the prompt is constructed in my own pipeline (the same applies to the evaluation code).
Unfortunately, I can't find my way through the code.
I believe the PromptRunExpander is responsible for creating the prompt, but I can't figure out what exactly it does, which values are used, and where they come from.
I would be happy for any explanation of how the information flows and how the final prompt is constructed.
The main flow for prompt generation happens in Runner.run_one (here). Specifically, it calls WMT14Scenario.get_instances (link) to download the dataset and load instances into memory, and it calls GenerationAdapter.generate_requests (link) to turn those instances into prompt strings, which are placed in RequestState.request.input.text.
You may also want to check out the documentation if you haven't already. Hope this helps.
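For reference, here is a minimal, self-contained sketch of the general prompt layout that the generation adapter produces: instructions, then one block per in-context example (input prefix + source sentence, output prefix + reference), then the eval input ending with the output prefix so the model completes the translation. The prefix strings, separator, and example sentences below are placeholders I made up for illustration, not the actual values from the WMT14 run spec's AdapterSpec, so please check the linked code for the real configuration:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    source: str  # source-language sentence
    target: str  # reference translation (empty for the eval instance)


@dataclass
class PromptConfig:
    # Placeholder values -- the real instructions and prefixes come from the
    # AdapterSpec used by the WMT14 run spec in HELM.
    instructions: str = "Translate the following sentences from German to English."
    input_prefix: str = "German: "
    output_prefix: str = "English: "
    instance_separator: str = "\n\n"


def build_prompt(train_examples: List[Example], eval_example: Example, cfg: PromptConfig) -> str:
    """Assemble a few-shot prompt: instructions, in-context examples, then the eval input."""
    blocks = [cfg.instructions]
    for ex in train_examples:
        blocks.append(f"{cfg.input_prefix}{ex.source}\n{cfg.output_prefix}{ex.target}")
    # The eval instance ends with the output prefix so the model completes the translation.
    blocks.append(f"{cfg.input_prefix}{eval_example.source}\n{cfg.output_prefix}")
    return cfg.instance_separator.join(blocks)


if __name__ == "__main__":
    train = [Example("Guten Morgen.", "Good morning.")]
    eval_ex = Example("Wie geht es dir?", "")
    print(build_prompt(train, eval_ex, PromptConfig()))
```

If you only need the prompt format in your own pipeline, reproducing this structure with the exact AdapterSpec values (instructions, prefixes, separator, number of in-context examples) should get you the same final prompts that HELM sends to the model.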