Add MechanicalTurkCritiqueClient with CSV import / export #1539
Conversation
Thanks, @yifanmai! Overall, the implementation makes sense. I just have a few concerns:

- There seems to be a bug with newlines when exporting the data CSV file (see PR comment for an example).
- The files seem to be written to `mturk/<metric name>`. I think they would get overwritten when running multiple scenarios/models that use the same metrics.
- Do we specify `description` and `reward` in HELM or in the MTurk UI when we create a project?
- Do we need the Mongo cache to keep track of which scenarios/models we have exported files for and have results for?
```python
            self.critique_client = SurgeAICritiqueClient(surgeai_credentials, self._build_cache_config("surgeai"))
        else:
            raise ValueError(
                "CritiqueClient is not configured; set critiqueType to 'mturk', 'mturk-sandbox', 'surgeai' or 'random'"
            )
```
I think `mturk-sandbox` is not supported, since the dispatch above only checks for `critique_type == "mturk"`.
Removed `mturk-sandbox`. You can still use the sandbox with this integration, though.
```python
        if critique_type == "random":
            self.critique_client = RandomCritiqueClient()
        elif critique_type == "mturk":
            self.critique_client = MechanicalTurkCritiqueClient()
```
Does the Mechanical Turk client require a cache to keep track of HITs and results that were imported?
We don't use the cache. When we import, we load the request results into memory instead of loading them into the cache. This is mostly because our current `Cache` abstraction does not support modifying existing entries, and the results will change over time (e.g. if we request 2 respondents, the result may initially contain only 1 response if only one respondent has answered so far, and gain a second response later). Note that this means that your memory has to be large enough to hold all the request and request-result pairs.
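A minimal sketch of that in-memory approach, with illustrative names (this is not the actual HELM API):

```python
from typing import Dict, List

# Results keyed by a serialized request; re-populated on every CSV import.
# Unlike the Cache abstraction, a plain dict lets an entry change as more
# respondents answer over time.
_results_by_request: Dict[str, List[dict]] = {}

def import_response(request_key: str, response: dict) -> None:
    # A result requested with 2 respondents may hold 1 response now and
    # gain the second when that respondent answers.
    _results_by_request.setdefault(request_key, []).append(response)
```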
```python
            # Note: Instructions can contain HTML.
            instructions="Rate the following summary.\n\nOriginal text:\n{{original_text}}\n\nSummary: {{summary}}",
            description="Rate summaries generated by a large language model",
            reward="1.00",
```
Is the dollar amount the pay for a single instance/summary?
Removed (not used any more).
```diff
@@ -23,30 +23,34 @@ class SummarizationCritiqueMetric(Metric):
     def __init__(self, num_respondents: int) -> None:
         self._template = CritiqueTaskTemplate(
-            name="HELM Summarization Evaluation",
+            name="summarization_critique",
```
Do we need to include the full run spec and maybe even the model name in `name`? Otherwise, would the output template and data files get overwritten if we're evaluating multiple models or even scenarios?
A single execution of `helm-run` with multiple models and run specs will produce only a single output layout and CSV file, because we maintain a singleton exporter per template.

It was previously true that multiple invocations of `helm-run` would result in the files getting overwritten. I updated the PR to fix this by appending a unique(-ish) timestamp to the generated file.
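For illustration, the fix could look something like this (the directory layout and file naming here are assumptions, not the exact PR code):

```python
from datetime import datetime

# Append a (mostly) unique timestamp so repeated helm-run invocations
# don't overwrite each other's exports.
timestamp = datetime.now().strftime("%Y%m%dT%H%M%S")
csv_path = f"mturk/summarization_critique/mturk_data_{timestamp}.csv"
```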
"""Convert from Surge AI template tag format to Mechanical Turk template tag format. | ||
|
||
{{field}} -> ${field}""" | ||
return re.sub(r"{{([^{}]+)}}", "${\\1}", raw_text) |
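A quick self-contained demonstration of this conversion (the wrapper function name is hypothetical):

```python
import re

def convert_tags(raw_text: str) -> str:
    """Convert {{field}} (Surge AI) tags to ${field} (Mechanical Turk) tags."""
    return re.sub(r"{{([^{}]+)}}", "${\\1}", raw_text)

print(convert_tags("Summary: {{summary}}"))  # prints: Summary: ${summary}
```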
The formatting is correct; here's what it looks like after importing into Mechanical Turk. (I added more newlines for illustration.)
Note that newlines in raw HTML are not displayed to the end user (this is just how HTML works). You will have to use `<p></p>`, `<br>` or `style="white-space: pre-wrap;"` if you want to render newlines.
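For example, a sketch of how the `instructions` field from above could be wrapped so newlines render (the wrapping is an assumption for illustration, not what the PR does):

```python
# Hypothetical: wrap the instructions in a container that preserves newlines.
instructions = (
    '<div style="white-space: pre-wrap;">'
    "Rate the following summary.\n\nOriginal text:\n{{original_text}}\n\nSummary: {{summary}}"
    "</div>"
)
```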
I see, that makes sense. Thanks!
src/helm/common/critique_request.py (Outdated)

```
@@ -80,3 +100,9 @@ class CritiqueRequestResult:

    responses: List[CritiqueResponse]
    """List of respondents' responses."""


def populate_template_with_fields(template: str, fields: Dict[str, str]):
```
Is this return type `str`?
Removed (not used any more).
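For context, a sketch of what the (now removed) helper might have looked like with an explicit `str` return type; the body is an assumption based on the `{{field}}` tag format used elsewhere in this PR:

```python
import re
from typing import Dict

def populate_template_with_fields(template: str, fields: Dict[str, str]) -> str:
    # Replace each {{field}} placeholder with its value from `fields`.
    return re.sub(r"{{([^{}]+)}}", lambda m: fields[m.group(1)], template)
```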
src/helm/common/critique_request.py (Outdated)

```python
    Only used by Mechanical Turk."""

    reward: str
```
If we specify `description` and `reward` here, how do we also specify worker requirements? Are these fields being used?
Removed (not used any more).
The worker requirements are specified in the web UI.
Addresses #1330